A machine can learn to think like Holmes if its brain is trained properly: the learner must transform a problem into an inner quantitative, qualitative and phenomenal representation. But that is not easy, as one has to acquire a natural scepticism and inquisitiveness towards the real world. Most of us have a tendency to make instant, crass judgements, like that fool Watson.

Sherlock Holmes displays powerful lines of logic and reasoning. His trick is to treat every thought, every experience and every perception the way he would a pink elephant: that is, to begin with a healthy dose of scepticism instead of the credulity that is the mind's natural state of being. This requires mindfulness, a constant presence of mind, the attentiveness and hereness that are so essential for real, active observation of the world. If we want to think like Sherlock Holmes, "we must want, actively, to think like him."

The primary tenet of representation is that learners actively construct their own knowledge from prior knowledge at a particular point on the novice-expert spectrum; learning can therefore be thought of as a continuum.

A subject's problem representation, the double-loop inner model on which decisions depend, has three facets: forming the conditions, unknowns and combinations already used (the quantitative lattice); determining which facts to use, and when and how to use them creatively (the qualitative lattice); and relating the problem to practical or phenomenal life, including "out-of-context" problem solving (the option lattice).

Each model represents a possibility. Models capture relationships among multi-dimensional or abstract entities; they can be static or dynamic. They underlie visual images, though many components of models are not visualizable. Each model is iconic: its parts correspond to the parts of what it represents, and its structure corresponds to the structure of the possibility. This iconic nature yields conclusions over and above the propositions used in constructing the model.
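The idea that a conclusion emerges from the set of models rather than from the propositions alone can be sketched computationally. The following is a minimal, illustrative Python toy (the variable names and premises are invented for the example): each possibility is a model, the premises prune the set of models, and a conclusion follows only if it holds in every remaining model.

```python
from itertools import product

def models(variables, premises):
    """Enumerate every assignment (model, i.e. possibility) satisfying all premises."""
    for values in product([True, False], repeat=len(variables)):
        m = dict(zip(variables, values))
        if all(p(m) for p in premises):
            yield m

def follows(variables, premises, conclusion):
    """A conclusion follows iff it holds in every model of the premises."""
    return all(conclusion(m) for m in models(variables, premises))

# Invented premises: "if it rains, the street is wet" and "it rains".
variables = ["rain", "wet"]
premises = [lambda m: (not m["rain"]) or m["wet"],  # rain -> wet
            lambda m: m["rain"]]                    # rain

print(follows(variables, premises, lambda m: m["wet"]))  # True (modus ponens)
```

No proof-theoretic rule is coded anywhere; the conclusion "wet" simply holds in every surviving possibility, which is the sense in which the model's structure yields more than the propositions that built it.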

On this continuum, knowledge representation as a lattice network (one in which, as in the electrical lattice filter, the series impedances and shunt impedances each occur twice, an arrangement that offers increased flexibility through the variety of responses achievable) is most closely aligned with cognitive development and adaptive processes.

It is a way for the machine to understand human knowledge, expressions and meanings, not the other way around. The meaning of sentences in a knowledge base is defined by a semantic account, while specialised inference procedures are developed for the given semantics, typically so that they are sound and complete, the characteristics that establish their formal adequacy for reasoning. Knowledge representation is the process of encoding this knowledge in a format that computers can use. In the declarative paradigm, the information in a knowledge base represents what humans know about a domain but does not constrain how to reason with it: which types of reasoning tasks one might want to perform, or how to execute them. For example, in a simple knowledge-expert system in the form of a relational database, we have a declarative specification of the knowledge, given by the set of atomic facts in the language determined by the relational schema. This representation leaves open which reasoning tasks (queries) the user might want to execute, and how they are actually executed once posed to the system.
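To make the declarative paradigm concrete, here is a toy, hypothetical Python sketch: a knowledge base of atomic facts over an invented `parent` relation. The representation fixes what is known; the queries, chosen freely afterwards, decide how that knowledge is used.

```python
# A toy declarative knowledge base: atomic facts in a relational schema.
facts = [
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
]

def query(relation, *pattern):
    """Match facts against a pattern; None acts as a wildcard."""
    for fact in facts:
        if fact[0] == relation and all(
            p is None or p == v for p, v in zip(pattern, fact[1:])
        ):
            yield fact[1:]

# Two reasoning tasks the representation left open, posed after the fact:
print(list(query("parent", "alice", None)))   # [('alice', 'bob')]
grandparents = [
    (g, c)
    for g, p1 in query("parent", None, None)
    for p2, c in query("parent", p1, None)    # derived relation, not stored
]
print(grandparents)                           # [('alice', 'carol')]
```

The grandparent relation is never stored; it is computed on demand, which is exactly the separation between the declarative facts and the reasoning tasks posed to them.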

In particular, agents learn how the very way they go about defining and solving problems can itself be a source of problems.

Double-loop learning is used when it is necessary to change the inner-representation model on which a decision depends. Unlike single-loop learning, it involves a shift in understanding, from simple and static to broader and more dynamic, such as taking into account changes in the surroundings and the consequent need to change the inner-representation models themselves.
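Chris Argyris's classic thermostat example illustrates the distinction. The sketch below is an illustrative Python toy (the class, set-points and majority rule are invented): single-loop learning adjusts the action against a fixed set-point, while double-loop learning questions and revises the set-point itself, i.e. the inner model on which the decision depends.

```python
class Thermostat:
    """Single loop corrects the action; double loop revises the governing model."""

    def __init__(self, set_point):
        self.set_point = set_point          # the inner model a decision depends on

    def single_loop(self, temperature):
        """Adjust behaviour against a fixed goal: heat on if below the set-point."""
        return "heat on" if temperature < self.set_point else "heat off"

    def double_loop(self, comfort_reports):
        """Question the goal itself: if occupants are mostly cold,
        the set-point, not the heater, is what needs changing."""
        if comfort_reports.count("cold") > len(comfort_reports) / 2:
            self.set_point += 2             # shift the governing model upward
        elif comfort_reports.count("hot") > len(comfort_reports) / 2:
            self.set_point -= 2

t = Thermostat(set_point=20)
print(t.single_loop(18))                # 'heat on'  (acting within the model)
t.double_loop(["cold", "cold", "ok"])
print(t.set_point)                      # 22         (the model itself changed)
```

The single loop never asks whether 20 degrees is the right target; only the double loop can revise that assumption in response to the surroundings.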

Effective double-loop learning is not simply a function of how humans (or machines) internalize the problem. It is a reflection of how they think: that is, of the cognitive rules or reasoning they use to design and implement their actions. Think of these rules as a kind of "master program" stored in memory (and in logic programs), governing all behaviour. Defensive reasoning can block learning even when an individual's commitment to it is high, just as a computer program with hidden bugs can produce results exactly the opposite of what its designers intended. A knowledge-expert system is, in principle, able to explain and justify its behaviour on the basis of its knowledge base and the inference steps it took to arrive at a conclusion. Another important consideration concerns the type of information one may want to include in a knowledge-based system. This can include the agent's (human or machine) knowledge, goals, obligations and preferences, or those of other agents. It can also include hypothetical or counterfactual information, assertions about the past or the future, and information about the actions available to the agent (or agents).
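The claim that a knowledge-expert system can justify its behaviour from its knowledge base and the inference steps it took can be sketched with a minimal forward-chaining engine that records every step. The facts and rules below are invented for illustration; real systems are far more elaborate.

```python
# Minimal forward-chaining engine with an explanation trace (illustrative toy).
facts = {"has_fur", "barks"}
rules = [
    ({"has_fur"}, "mammal"),
    ({"mammal", "barks"}, "dog"),
]

trace = []  # each entry: (premises used, fact derived)
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append((sorted(premises), conclusion))
            changed = True

def explain(goal):
    """Replay the recorded inference steps that led to a conclusion."""
    return [f"{' & '.join(p)} => {c}" for p, c in trace if c == goal] or ["given"]

print(explain("dog"))  # ['barks & mammal => dog']
```

Because every derivation is logged, the system can answer not only "what follows?" but also "why?", which is the sense in which its behaviour is explainable and justifiable.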

Intelligence maintains several knowledge-complex problem representations, for salience over a choice set and/or for dynamic inference. One becomes formalized knowledge to be used in other situations, mathematically expressed as a monad and paired comparisons; the other becomes inferencing knowledge to be applied as experience in other situations, mathematically expressed as diffusion and diffusion dynamics.

Language, in this context, only adds to a preverbal logic that may still differ from the reasoning abilities that emerge once language comes along, since language may open additional reasoning abilities unavailable to the speechless brain (babies, interactions in unknown or unfamiliar languages, indigenous or tribal languages, superstrate or substrate languages, musicians, pets and other animals, etc.).

Imagine now that you are Holmes and you are observing me, Maria, for the first time. What might you deduce? On second thoughts, let's not go there. Rather, let us consider the three-pipe problem. How do we recognise it? The answer is simpler than you may think: by smoking three pipes.