An artificial intelligence primer – from machine learning to computer vision



Artificial intelligence has the potential to impact almost every area of life. In the first of a two-part series explaining the technology behind the headlines, this article looks at the different branches of AI technology and what they can do

When we think of artificial intelligence (AI), most of us teeter between excitement and concern about its rise. And with AI, just like anything, the unknowns fuel our concerns.

AI and generative AI are unleashing amazing opportunities that will enable governments to be much more productive and effective, getting more done better, faster, and more easily. These technologies will enable us to run virtual simulations before taking real actions, prevent adverse events, prepare for changing conditions, detect areas of concern sooner and with greater accuracy, engage in more meaningful ways, and manage our resources better.

So, what is AI?

Artificial intelligence is the science of designing systems to support and accelerate human decisions and actions. These systems perform tasks that have historically required human intelligence. But it’s called artificial intelligence for a reason: the simulation of human intelligence is performed by machines that have been programmed to learn and think. AI does not replace humans; it augments and accelerates what we do and how we do it, increasing overall efficiency and productivity.

When we talk about the different types of AI, we sometimes refer to them as “branches of AI.” Each branch performs different types of tasks. Three of the traditional branches of AI used by governments are machine learning, computer vision, and natural language processing. These three branches of AI are interconnected and often overlap, with advancements in one area often influencing progress in others.

And generative AI – or GenAI – is a subset of deep learning, which in turn is a subset of machine learning. Three technologies within GenAI are large language models (referred to as LLMs), synthetic data, and digital twins.

For those of you who have been hearing a lot about, or using, ChatGPT or Copilot: both are built on LLMs.

Before we talk about generative AI, let’s discuss traditional AI technologies and how they work.

Machine learning

Machine learning systems learn from data, identify patterns, and make decisions with minimal human intervention.

You may have taken a computer class at some point in which you wrote conditional, or If-Then, statements.

For example, an estate agent might say that “if the property is adjacent to a lake, increase its value by 10%.”
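A hand-written rule like that might look something like the following short Python sketch (the function name and the figures are purely illustrative):

def estimate_value(base_value, adjacent_to_lake):
    # Hand-written valuation rule: a human chose the 10% premium.
    if adjacent_to_lake:
        return round(base_value * 1.10)
    return base_value

print(estimate_value(200_000, True))   # 220000
print(estimate_value(200_000, False))  # 200000

The premium is fixed at 10% because a person wrote it that way; the code never looks at any actual sales data.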

But machine learning does not require you to write “if-then” statements. Machine learning models learn from the data that is fed into them – and the more data you feed a model, the more accurate it becomes.

The machine is able to ingest massive amounts of data, extract key features, determine a method of analysis, write the code to execute that analysis, and produce an intelligent output – all through an automated process.

For example, imagine a computer assessing the value of properties. The computer considers thousands of properties and compares those next to water features against those that are not. From the data it reads, the computer determines that properties adjacent to lakes are 11% more valuable than those that are not. That 11% does not become a fixed rule: any change to the data fed into the system will change the learned relationships and the output. Typically, the more data a system processes, the more refined the answers become.
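As a minimal sketch of that idea (assuming the scikit-learn library and a tiny, made-up set of sales records), a model can infer the lakeside premium from the data instead of from a hand-written rule:

# Minimal sketch: the lake premium is learned from made-up data, not hard-coded.
# Assumes scikit-learn is installed.
from sklearn.linear_model import LinearRegression

# Features: [floor_area_m2, adjacent_to_lake (0 or 1)]; targets: sale prices.
X = [[80, 0], [80, 1], [120, 0], [120, 1], [150, 0], [150, 1]]
y = [160_000, 178_000, 240_000, 266_000, 300_000, 333_000]

model = LinearRegression().fit(X, y)

# The coefficient on the lake feature is the premium the model inferred.
print(model.coef_[1])             # learned lakeside premium, in currency units
print(model.predict([[100, 1]]))  # estimated value of a 100 m2 lakeside property

Swap in different sales data and the learned premium changes with it, which is exactly the point: the rule comes from the data, not from the programmer.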

Deep learning

Deep learning is a subset of machine learning that teaches computers to process data in a way that is inspired by the human brain. In the same manner that neurons in the brain send information between brain cells, layers of nodes in deep learning work together to process data and solve problems. Deep learning can be compared to teaching a child to recognize animals through layers of learning, constant testing and correction, and enough diverse examples to ensure the child can generalize to new situations. Deep learning, like the child, improves with practice, refining its understanding with each new example. Deep learning is used for natural language processing, computer vision, and generative AI.
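As a loose sketch of what “layers of nodes” means in code (assuming the PyTorch library; the layer sizes here are arbitrary), a small network simply stacks layers that feed into one another:

# Loose sketch of "layers of nodes", assuming PyTorch is installed.
# The layer sizes are arbitrary illustrations, not a recommended design.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),  # input layer: 10 raw features in, 32 nodes out
    nn.ReLU(),          # non-linearity lets the layers build on one another
    nn.Linear(32, 16),  # hidden layer: combines lower-level patterns
    nn.ReLU(),
    nn.Linear(16, 1),   # output layer: a single prediction
)

example = torch.randn(1, 10)  # one example with 10 features
print(model(example))         # the untrained network's output

Training would repeatedly show the network labelled examples and nudge the weights in every layer after each mistake, much like the correction loop with the child.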

Natural language processing

Natural language processing enables understanding, interaction and communication between humans and machines.

NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment, and determine which parts are important. The overarching goal is to take raw language input and use linguistics and algorithms to transform or enrich the text in such a way that it delivers greater value.

Natural language processing goes hand in hand with text analytics, a machine learning technique that counts, groups, and categorizes words to extract structure and meaning from large volumes of content.
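A very small sketch of that counting-and-grouping idea, using only the Python standard library (the report sentences and category word lists are invented for illustration):

# Tiny sketch of counting and categorising words in raw text.
# The example sentences and category word lists are invented.
from collections import Counter
import re

reports = [
    "Residents reported a theft near the lake on Monday evening.",
    "A second theft was reported; a suspect was seen near the marina.",
]

# Count word frequencies across the documents.
words = [w for text in reports for w in re.findall(r"[a-z']+", text.lower())]
counts = Counter(words)
print(counts.most_common(5))

# Group words into simple categories to add a little structure.
categories = {"crime": {"theft", "suspect"}, "place": {"lake", "marina"}}
for label, vocab in categories.items():
    print(label, sum(counts[w] for w in vocab))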

All these branches of AI contribute to one another. AI can augment human efforts to analyse unstructured text by combining natural language processing, machine learning, and linguistic rules. NLP and text analytics are used together for many applications, including investigative discovery, subject-matter expertise, and social media analytics.

For example, crime investigations typically involve a massive number of intelligence reports. Not only are these reports extremely time-consuming to read, but extracting the key people, addresses, phone numbers, and relationships pertinent to a case is also cumbersome. Each new piece of information learned from a crime report demands scouring previously read reports, making the process repetitive and lengthy.

Using machine learning, the people, places, events, objects, phone numbers, and email addresses can be extracted from long-form text such as crime reports and put into tables. This expedites the discovery of information.
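A toy sketch of that kind of extraction, using plain regular expressions on an invented snippet of report text (a real system would rely on trained entity-recognition models rather than hand-written patterns like these):

# Toy sketch of pulling structured fields out of long-form text.
# The report text is invented; real systems use trained entity-recognition
# models rather than hand-written patterns like these.
import re

report = (
    "Witness Jane Doe called 07700 900123 and later emailed "
    "tips@example.org about a vehicle seen on Harbour Road."
)

extracted = {
    "phone_numbers": re.findall(r"\b\d{5}\s?\d{6}\b", report),
    "email_addresses": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", report),
}
print(extracted)
# {'phone_numbers': ['07700 900123'], 'email_addresses': ['tips@example.org']}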

Applying linguistics and analytics, an NLP system can extrapolate nuances such as sentiment from the sentences within a report. This is accomplished by discerning the syntax (the structure, arrangement, and order of words and phrases), the semantics (the meaning of words, phrases, and sentences), and the discourse (how language is used in context to convey meaning).
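To give a flavour of how a sentiment score might be derived, here is a deliberately simplistic, hand-rolled word-list sketch; production NLP systems model syntax, semantics, and discourse far more richly than this:

# Hand-rolled sentiment sketch using a tiny made-up word list.
# Real NLP systems go far beyond simple word matching.
import re

POSITIVE = {"cooperative", "calm", "helpful"}
NEGATIVE = {"hostile", "threatening", "agitated"}

def sentiment(sentence):
    words = re.findall(r"[a-z]+", sentence.lower())
    score = 0
    for i, word in enumerate(words):
        value = (word in POSITIVE) - (word in NEGATIVE)
        if i > 0 and words[i - 1] in {"not", "never"}:
            value = -value  # crude negation handling
        score += value
    return score

print(sentiment("The witness was calm and cooperative."))               # 2
print(sentiment("The suspect was not cooperative and grew agitated."))  # -2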



#artificialintelligence
#machinelearning
#deeplearning
#computervision
#neuralnetworks
#ai
#dataanalysis
#datascience
#bigdata
#algorithms
#patternrecognition
