AN APPROACH TO ARTIFICIAL INTELLIGENCE

We are giving birth to a new kind, but it is not human. Like human intelligence, it possesses its own intelligence; it can make its own decisions and evolve. Humans call it Artificial Intelligence (A.I.). Unlike humans, these machines have access to their own source code and are able to modify it. In other words, A.I. is the capability of a machine to imitate intelligent human behavior. There has been great discussion about whether machines could take over and dominate the human race; let the future decide that. Here, we are talking about how we are developing a new kind and giving it its own intelligence.
Intelligence is the ability to learn or understand. An A.I. can make its own decisions depending on the situation and can deal with new or trying situations. Intelligence can also be viewed as the ability to think abstractly. So how are we putting sense into machines? What technology are we developing that contributes to A.I.? Is that technology sufficient?
There have been a lot of developments going on around the world. The human race has advanced so much in the field of technology that we have become far more proficient, powerful, and prevailing on this planet. Humans are now passing this gift of nature on to a new kind: machines. Artificial intelligence, developed by humans, is not merely a possibility in the future; it is inevitable.

But how? The ability of our unconscious mind to group things into patterns based on visual clues is what allows us to process visual information so quickly and efficiently. Understanding and applying a model of how the human mind processes information is one of the key elements. The human body is a system whose parts coordinate simultaneously; it is the most efficient system in nature, after nature itself. Our different organs, each with its own function, work in parallel to make us capable. Similarly, we are developing "organs" for machines and later coordinating them to form an intelligent system. Putting sense inside a machine is not like teaching a child that touching fire will hurt because it is hot. A machine understands everything in terms of bits and bytes, or, to be more specific, current and voltage. We talk to machines through programming languages and software that are translated into machine language (1s and 0s) to make them act.
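The idea that a machine sees everything as 1s and 0s can be made concrete with a tiny sketch (the helper name `to_bits` is mine, just for illustration):

```python
# A minimal sketch of the idea that machines see everything as bits:
# the text we type is stored as numbers, and those numbers as 1s and 0s.

def to_bits(text):
    """Encode a string as the binary form of its byte values."""
    return [format(byte, "08b") for byte in text.encode("utf-8")]

print(to_bits("Hi"))  # each character becomes eight 1s and 0s
```

Running this on the word "Hi" yields `['01001000', '01101001']`: two characters, sixteen bits, and everything a computer ever stores looks like this underneath.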

Software is becoming more and more powerful, controlling the advanced hardware of today. Software is replacing physical objects with virtual ones at the machine level, and it is the medium through which we interact with machines. Imagine: a decade ago we used to have movie cassettes, and now we have digital formats that are portable, with many benefits and improvements in quality. We have powerful computers capable of performing calculations faster than the whole human race combined. The calculating power of computers is increasing exponentially, not linearly. That means the advancement in hardware and software is not governed by linear factors but by exponential ones.
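The difference between linear and exponential growth, which the paragraph above leans on, is easy to see with a few lines of arithmetic:

```python
# Linear growth adds a fixed amount each step; exponential growth
# multiplies. After only a few steps the difference is dramatic.

linear, exponential = 1, 1
for step in range(10):
    linear += 1        # grows by a constant amount each step
    exponential *= 2   # doubles each step (Moore's-law-style growth)

print(linear, exponential)  # 11 vs 1024 after ten steps
```

After ten steps the linear quantity has merely reached 11 while the doubling quantity has reached 1024; that gap is why exponential progress in hardware keeps surprising us.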
Talking about the organs of machines that imitate humans, I am presenting a paradigm of basic technology products that we are familiar with in our daily lives. We humans have sense organs: bodily structures that receive stimuli and convey specific impulses to the central nervous system, where they are interpreted as corresponding sensations. Basically, there are five sense organs: eyes, ears, nose, mouth, and skin, which help us see, hear, smell, talk, and feel. They are the receptors of external stimuli, which are transferred to the central nervous system and analyzed. Giving this sensation to a computer is a tough task. We are not developing the whole intelligent system at once; rather, we are developing each technology as an independent function that defines the system, similar to the logic of mathematics.
Vision: To understand anything, you must have an image of it in your mind. Our mind stores everything we know or see as a pattern, so whenever we recall something, the pattern is revisited and the information stored with it is retrieved. "Elephant". As you read the previous sentence, you probably pictured an elephant in your mind. So how can we make a system capture the environment? Do we have that technology? Yes, of course.
I think it might be in your pocket or hand right now: a camera. Imagine how easy it is to take a snapshot, and yet how complex it is to transform the real world into bits. Imagine a tree being represented by some sequence of codes and algorithms. Now you have two different definitions of the same thing: physical and virtual. We are not going through the conversion processes, mechanisms, and techniques in this article; we are only getting a sense of them. So the camera is like an eye that lets the computer see. But does the computer interpret the image as a tree, or just as a sequence of codes? I hope you all use Google, which recently upgraded its search feature with "image as keyword": just upload an image and it will show results related to it. The algorithm is capable of understanding the physical objects in the file, not only the sequence of codes.
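To make the gap between "sequence of codes" and "recognized object" concrete, here is a toy sketch. To a computer an image really is just a grid of numbers; the tiny 4x4 "image" and the hand-written pattern rule below are invented for illustration, nothing like the learned models real systems use:

```python
# To a computer, an image is just a grid of numbers (pixel intensities:
# 0 = dark, 255 = bright). Recognizing WHAT the numbers depict is the
# hard part; the naive matcher below hand-codes one crude "pattern".

image = [
    [0, 255, 255, 0],
    [255, 0, 0, 255],
    [255, 0, 0, 255],
    [0, 255, 255, 0],
]

def looks_like_ring(img):
    """A hand-fed rule: bright edge pixels surrounding a dark center."""
    center = [img[1][1], img[1][2], img[2][1], img[2][2]]
    border = [img[0][1], img[0][2], img[3][1], img[3][2]]
    return all(p == 0 for p in center) and all(p == 255 for p in border)

print(looks_like_ring(image))  # True
```

A real vision system learns thousands of such patterns automatically from examples rather than having one rule typed in, but the underlying question is the same: which arrangement of numbers counts as which object?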
Listen: We all record songs on our computers with our beautiful voices. Similar to the human ear, a microphone is capable of hearing sounds; technically speaking, it converts an analog signal into a digital signal. How about interpreting them? Again, to answer this question, there is Google. The voice search feature in the search engine is able to understand what you speak and display results related to it. Not only that, it can understand almost any human language in the world.
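The analog-to-digital conversion a microphone setup performs can be sketched in miniature: sample a continuous wave at regular intervals and round each sample to an integer. The function name and parameters here are my own, just to illustrate the principle:

```python
import math

# A microphone system's job, in miniature: sample a continuous (analog)
# wave at regular time intervals and quantize each sample to an integer,
# producing a digital signal.

def sample_wave(freq_hz, sample_rate, n_samples):
    """Sample a sine wave and quantize it to 8-bit-style integer values."""
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                              # time of this sample
        amplitude = math.sin(2 * math.pi * freq_hz * t)  # analog value in [-1, 1]
        samples.append(round(amplitude * 127))           # quantized digital value
    return samples

digital = sample_wave(freq_hz=440, sample_rate=8000, n_samples=8)
print(digital)  # a sound wave, now just a list of integers
```

Once sound is a list of integers like this, interpreting it (speech recognition) becomes the same kind of pattern problem as vision: finding which sequences of numbers correspond to which words.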
Talk: It is important to communicate. Today's computers can communicate both with humans and with other computers; communication among multiple computers is known as a network. They can speak and understand languages; they can read your documents, emails, and so on. Technically, this is known as text-to-speech, a feature you may find in software such as the Microsoft Office package.
Smell & Feel: The three most important senses are described above, but smell and touch are also useful, just as blind people use their hands to feel objects and learn their structure and identity. These senses matter most when another organ functions improperly. If we design a machine or system with all the features above, these two might not be essential for it to understand objects. You can describe the features and characteristics of something, and the web will show the thing you might be looking for. If you ask for something that is round, used in sports, and played with the foot, it might come up with the result: football.
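The "describe the features, get the thing" idea can be sketched as a lookup over structured facts. The objects and feature sets below are made up for illustration; a real system would draw on a vast knowledge base:

```python
# A toy version of "describe the features and get the thing": match a set
# of requested features against stored feature sets.

objects = {
    "football": {"round", "sport", "played with foot"},
    "tennis ball": {"round", "sport", "played with racket"},
    "orange": {"round", "fruit"},
}

def find_by_features(*features):
    """Return the objects whose feature sets contain every given feature."""
    wanted = set(features)
    return [name for name, feats in objects.items() if wanted <= feats]

print(find_by_features("round", "sport", "played with foot"))  # ['football']
```

Asking for something round, sporty, and played with the foot narrows three candidates down to one, which is exactly the kind of inference the paragraph describes.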
These are the technologies we are developing independently that can together make a system imitate the human body. But is that technology sufficient? No, because the main characteristic of an artificially intelligent machine is the ability to make decisions. Currently, we can develop a system or computer that makes decisions based on pre-defined conditions fed to it by humans. But how many pre-defined conditions can we feed into computers? We do not even know how many, or which, conditions to feed; there are infinite conditions and situations, each with many possible responses. What will you do when you see a tiger in front of you? You run. I don't think you'll run when it is actually a photograph of a tiger; or you may run anyway if you are crazy enough, or cry with fear if you are a child. That is what I am talking about. Imagine the same conditions for computers. So are we developing systems capable of making decisions? Of course. Some examples are Wolfram Alpha, semantic search, etc.
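The tiger example can be written out as pre-defined conditions, which also makes the limitation obvious: every rule below is hand-fed, and we can never enumerate every situation in advance. The condition names are mine, chosen to mirror the cases in the text:

```python
# The tiger example as hand-fed rules: the "decision" depends not only on
# the stimulus (a tiger) but on the situation. Every branch here had to be
# anticipated by a human, which is exactly the limitation described above.

def react_to_tiger(is_photograph, is_child):
    if is_photograph:
        return "stay calm"   # a picture is not a threat
    if is_child:
        return "cry in fear" # a child may not know to run
    return "run"             # the default survival response

print(react_to_tiger(is_photograph=True, is_child=False))   # stay calm
print(react_to_tiger(is_photograph=False, is_child=False))  # run
```

Three branches cover three situations; real life has infinitely many, and that gap between enumerable rules and open-ended situations is the core obstacle to decision-making machines.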

Wolfram Alpha is an answer engine rather than a search engine: it answers factual queries directly by computing the answer from structured data, rather than providing a list of documents or web pages that might contain the answer, as a search engine would. This means we have already developed a system that is able to compute, or learn. It is also capable of responding to specifically phrased natural-language, fact-based questions such as "How old was Queen Elizabeth II in 1974?". Imagine where these developments will go in the coming years as they become more advanced. If machines can understand natural language at present, then in the near future they will be able to talk like humans, responding to the current situation or context.
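The answer-engine idea can be shown in miniature: compute the answer from stored facts instead of returning pages. The birth year below (1926) is real; the fact table and query format are invented for this sketch, and the calculation ignores the month of birth:

```python
# An answer engine in miniature: instead of returning documents, compute
# the answer from structured data. The fact table is a stand-in for a real
# knowledge base.

facts = {"queen elizabeth ii": {"born": 1926}}

def age_in_year(person, year):
    """Compute an age from stored facts (ignoring the month of birth)."""
    born = facts[person.lower()]["born"]
    return year - born

print(age_in_year("Queen Elizabeth II", 1974))  # 48
```

The difference from a search engine is visible even at this scale: nothing here retrieves a page containing "48"; the number is derived from data, which is what "computing the answer" means.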

Semantic search seeks to improve search accuracy by understanding the searcher's intent and the contextual meaning of terms as they appear in the searchable data space, whether on the web or within a closed system, to generate more relevant results. Semantic search systems consider various factors, including the context of the search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching, and natural-language queries. That means search results are displayed depending on various factors, situations, and conditions; i.e., the system is able to make decisions in response to external stimuli or conditions provided by the user.
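One ingredient from that list, synonym handling, can be sketched on its own: expand the query with words that mean the same thing so documents using different vocabulary still match. The synonym table and documents below are made up for illustration:

```python
# One ingredient of semantic search: expand a query word with its synonyms
# so that documents using different words for the same concept still match.

synonyms = {"film": {"film", "movie", "motion picture"}}

documents = [
    "a classic movie from 1960",
    "a recipe for apple pie",
    "restoring an old motion picture",
]

def semantic_match(query_word):
    """Return documents containing the word or any of its known synonyms."""
    terms = synonyms.get(query_word, {query_word})
    return [doc for doc in documents if any(term in doc for term in terms)]

print(semantic_match("film"))  # matches "movie" and "motion picture" too
```

A plain keyword match for "film" would return nothing here; the expanded query finds two relevant documents, which is the accuracy improvement semantic search is after.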

It's not that we are taking a giant leap; we are developing small pieces that contribute to the whole.
(Article further on development…)