A machine can learn to think like Holmes if one trains its brain properly. But it isn’t easy, as it has to acquire a natural scepticism and inquisitiveness towards the real world. Most of us have a tendency to make instant, crass judgements, like that fool Watson.

Sherlock Holmes displays powerful lines of logic and reasoning. His trick is to treat every thought, every experience and every perception the way he would a pink elephant. In other words, he begins with a healthy dose of scepticism instead of the credulity that is the mind’s natural state. This requires mindfulness: constant presence of mind, the attentiveness and hereness that are so essential for real, active observation of the world. If we want to think like Sherlock Holmes, “we must want, actively, to think like him.”

Knowledge management and information integration have renewed interest in advanced logic-based formalisms for knowledge representation and reasoning, in a form that computers can use to solve problems. The process involves encoding knowledge in a formal language that a computer program can interpret, and applying reasoning algorithms to that representation. Several issues arise when knowledge reasoning and inference are used in applications. One is ambiguity: when multiple pieces of information are represented, it can be hard to determine which is relevant to a given situation, and incorrect inferences may follow. Another is incompleteness: if some information is not represented, certain inferences become impossible, which can lead to unexpected results or behaviour. Finally, knowledge representation and reasoning must be designed to be computationally efficient, particularly when the knowledge base is large and complex; otherwise the applicability of legacy as well as new systems that use these techniques is limited.
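To make these issues concrete, here is a minimal sketch in Python of a knowledge base of (subject, predicate, object) facts with a naive forward-chaining reasoner; the facts and the single taxonomy rule are invented purely for illustration. It also shows how incompleteness bites: remove one fact and a statement that is true of the world can no longer be inferred.

```python
# Minimal sketch: a set of (subject, predicate, object) facts plus naive forward chaining.
# All facts and the taxonomy rule are hypothetical, for illustration only.

facts = {
    ("whiskers", "is_a", "cat"),
    ("cat", "subclass_of", "mammal"),
}

def forward_chain(facts):
    """Rule: if X is_a C and C subclass_of D, infer X is_a D; repeat until nothing new."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, p1, c) in inferred:
            if p1 != "is_a":
                continue
            for (c2, p2, d) in inferred:
                if p2 == "subclass_of" and c2 == c and (x, "is_a", d) not in inferred:
                    new.add((x, "is_a", d))
        if new:
            inferred |= new
            changed = True
    return inferred

closure = forward_chain(facts)
print(("whiskers", "is_a", "mammal") in closure)  # True: inferred, never stated directly
# Incompleteness: drop ("cat", "subclass_of", "mammal") and the query above is False,
# even though the real-world statement remains true.
```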

It is a way for the machine to understand human knowledge, expressions and meanings, not the other way around. The meaning of the sentences in a knowledge base is defined by a semantic account, and specialised inference procedures are developed for that semantics, typically so that they are sound and complete, the characteristics that establish their formal adequacy for reasoning. Knowledge representation is the process of encoding this knowledge in a format that computers can use. In the declarative paradigm, the information in a knowledge base represents what humans know about a domain but does not constrain how to reason with it: which reasoning tasks one might want to perform, or how to execute them. For example, in a simple knowledge-expert system in the form of a relational database, we have a declarative specification of the knowledge given by the set of atomic facts in the language determined by the relational schema. This representation leaves open which reasoning tasks (queries) the user might want to execute, and how they are actually executed once posed to the system.
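As a hedged illustration of this separation (the table, the facts and the query are all invented), the Python sketch below uses the standard library’s sqlite3 as a relational knowledge base: the atomic facts are stated declaratively, while the choice of queries and their execution strategy remain entirely open until a user poses them.

```python
import sqlite3

# Declarative part: a relational schema and atomic facts about a toy domain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lives_in (person TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO lives_in VALUES (?, ?)",
    [("Holmes", "London"), ("Watson", "London"), ("Adler", "Prague")],
)

# Reasoning part: queries are chosen by the user, not by the knowledge base,
# and the engine decides how to execute them (planning, indexes, join order).
same_city = conn.execute(
    """SELECT a.person, b.person
       FROM lives_in a JOIN lives_in b
       ON a.city = b.city AND a.person < b.person"""
).fetchall()
print(same_city)  # [('Holmes', 'Watson')]
```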

Next, a knowledge-expert system is, in principle, able to explain and justify its behaviour on the basis of its knowledge base and the inference steps it took to arrive at a conclusion. For example, a knowledge system may indicate that a specific brand was not recommended for a consumer, even though the brand is preferred given the buyer’s needs, because the consumer is allergic to the class of products to which the proposed brand belongs. Another important consideration concerns the type of information one may want to include in a knowledge-based system. Often it is factual information pertaining to a particular application domain, but a knowledge base may include any additional pertinent information: the agent’s (human or machine) knowledge, goals, obligations and preferences, or those of other agents; hypothetical or counterfactual information; assertions about the past or future; and information about the actions available to the agent (or agents).
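A sketch of such a justification trace, assuming a toy catalogue, preference order and allergy list (all hypothetical): every exclusion records the fact and rule that caused it, so the system can report why the preferred brand was passed over.

```python
# Toy recommender that records a justification for every exclusion.
# Brands, product classes and the allergy rule are invented for illustration.

catalog = {"NuttyCrunch": "peanut_snack", "OatBites": "oat_snack"}
preferences = ["NuttyCrunch", "OatBites"]   # ordered by the buyer's preference
allergies = {"peanut_snack"}                # product classes the consumer must avoid

def recommend(preferences, catalog, allergies):
    explanations = []
    for brand in preferences:
        product_class = catalog[brand]
        if product_class in allergies:
            explanations.append(
                f"{brand} excluded: belongs to class '{product_class}', "
                f"to which the consumer is allergic."
            )
            continue
        return brand, explanations
    return None, explanations

choice, why = recommend(preferences, catalog, allergies)
print(choice)  # OatBites
print(why)     # ["NuttyCrunch excluded: belongs to class 'peanut_snack', ..."]
```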

Intelligence maintains several knowledge representations for establishing salience over a choice set and for dynamic inference. One becomes formalised knowledge that can be reused in other situations, expressed mathematically as monads and paired comparisons; the other becomes inferential knowledge applied as experience in new situations, expressed mathematically as diffusion and diffusion dynamics.
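One possible reading of the paired-comparison half of this claim, offered only as an illustrative assumption rather than the author’s formalism, is ranking a choice set from pairwise judgements; the Python sketch below simply counts pairwise wins to order the options by salience.

```python
from collections import Counter

# Hypothetical pairwise judgements: each (winner, loser) tuple records that the
# agent preferred the first option over the second in a direct comparison.
comparisons = [
    ("tea", "coffee"), ("tea", "water"),
    ("coffee", "water"), ("tea", "coffee"),
]

wins = Counter(winner for winner, _ in comparisons)
choice_set = {option for pair in comparisons for option in pair}

# Salience over the choice set: options ordered by how often they win comparisons.
ranking = sorted(choice_set, key=lambda option: wins[option], reverse=True)
print(ranking)  # ['tea', 'coffee', 'water']
```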

Language, in this context, only adds to a preverbal logic that may differ from the reasoning abilities that emerge once language comes along: language may open up additional reasoning abilities unavailable to the speechless brain (babies, interactions in an unknown or unfamiliar language, indigenous and tribal languages, strata or substrates, musicians, pets, animals, and so on).

Imagine now that you are Holmes and you are observing me, Maria, for the first time. What might you deduce? On second thoughts, let’s not go there. Rather, let us consider the three-pipe problem. How do we recognise it? The answer is simpler than you may think: by smoking three pipes.