Today, we will explain ChatGPT for beginners and look at how it works technically.
ChatGPT managed to shock the world, and as of 2023 it is worth trying to understand its technical side better.
Many people think that AI research has only been around for the past two decades, which is false. AI research has been around for roughly eighty (80) years; it is conversational AI specifically that has advanced immensely in the past two decades.
Throughout those eighty years of AI research, many people attempted to crack the code of conversational AI, and for decades they all failed.
There are several reasons why cracking this code has been difficult:
- Human natural language is not a precise science
- Human natural language carries nuance in both grammatical structure and the context of a sentence
- Conversations are fundamentally context-bound
These difficulties are why conversational AI was so limited for so long.
How Was Conversational AI (ChatGPT) Cracked?
The AI scientists used a program that is completely reactionary. For example: a user logs into the ChatGPT website and starts typing a paragraph about something they want to discuss; the program takes all the text the user typed, runs it through the model, and gives out an output. That makes it entirely reactionary: there is the input part, and then there is the generated-response part. That loop is all there is to this kind of AI program.
But this is too complex to program directly in a computer or implement as artificial intelligence, so how was it simplified?
They took the brain and its neurons as the model, since the program had to generate responsive answers (output). Because the whole brain is far too complicated to implement, they simplified the concept down to one (1) single neuron. Assume this neuron has one input (real neurons can have many). The input can receive an electrical signal from zero (0) to nine (9), where zero is very weak and nine is a powerful signal. Now assume this neuron receives a signal of four (4), and that when it receives a four it outputs two (2), four (4), six (6), and eight (8), because it has connections to four different neurons. Those four downstream neurons therefore receive 2, 4, 6, and 8. If all the neurons in a computer program connect like this, are we mimicking how the brain functions? In a simplified way, yes, and this is what makes conversational AI capable of giving humans understandable answers.
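The single-neuron example above can be sketched in a few lines of Python. The connection weights here are hypothetical numbers chosen purely so that an input signal of 4 produces the outputs 2, 4, 6, and 8 on its four connections:

```python
# A toy version of the single-neuron example. The weights are made-up
# values (0.5, 1.0, 1.5, 2.0) so that an input of 4 yields 2, 4, 6, 8.

def neuron(signal, weights=(0.5, 1.0, 1.5, 2.0)):
    """Scale the incoming signal by each connection's weight,
    producing one output per connected downstream neuron."""
    return [signal * w for w in weights]

print(neuron(4))  # -> [2.0, 4.0, 6.0, 8.0]
```

Each of the four numbers would then become the input signal to one of the four downstream neurons, and chaining many such neurons together gives the network described above.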
AI Image Recognition and Its Importance
Now, image recognition is what made AI a thing; this is where we started seeing the importance of using neural networks.
Scientists took a set of hundreds of thousands of images of dogs, birds, and cats. They present a picture of, say, a bird, convert it into electrical signals, and feed those signals into the initial input layer, which can contain hundreds of thousands of neurons, in order to decipher whether the image shows a dog, a cat, or a bird.
The input-layer neurons all light up and activate, then pass signals along their links to the next layer. This process of passing signals from layer to layer repeats until the neurons reach three outputs, in this case one each for the dog, the cat, and the bird.
Input layer > lights up > passes signals to next layer > repeats layer by layer > neurons give outputs > Output layer
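The flow above can be sketched as a miniature, entirely made-up network in Python: an image becomes a vector of signals, each layer "lights up" and passes signals on, and the output layer has one neuron per class. The layer sizes and random weights are illustrative only; a real network is vastly larger and its weights come from training.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4))   # input layer -> hidden layer connections
W2 = rng.normal(size=(4, 3))   # hidden layer -> output layer connections

def forward(image_signals):
    """Pass signals layer by layer and pick the output neuron
    that lights up the strongest."""
    hidden = np.maximum(0, image_signals @ W1)  # neurons "light up" (ReLU)
    scores = hidden @ W2                        # pass signals to output layer
    return ["dog", "cat", "bird"][int(np.argmax(scores))]

print(forward(rng.random(6)))  # one of "dog", "cat", or "bird"
```

With untrained random weights the answer is meaningless; training, described next, is what makes the right output neuron light up.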
What is AI Training and How Does it Work?
We will reuse the dog, bird, and cat example, with hundreds of thousands of images of each animal. Initially, the neurons are connected so that each neuron links to every single neuron in the next layer; some neurons will activate some of the subsequent neurons and ignore others. The idea behind training is this: you have a training data set of thousands of pictures of dogs, birds, and cats, and when the network gives the wrong answer, you tell it that the outcome was incorrect and that it needs to change its activation behavior. Then you feed the images in again and test whether it gives the right answer, and this process repeats until it does.
While you are training it, the activity within the hidden layers, which is at first totally random, starts to settle into specific patterns: if you show a picture of a dog, certain neurons light up, and the network eventually gives the right output. That is training. In the end, those activations and connections are the smarts inside an AI.
This particular neural network design, with input layers first, then subsequent layers of neurons (hidden layers), and finally an output layer of neurons, is especially good for image recognition. But this network pattern fails miserably at other tasks, specifically natural language.
ChatGPT Was Made From Google Papers
Google arguably employs some of the most advanced AI scientists, and OpenAI's scientists learned from Google's published papers.
The Google researchers realized they could come up with more than just one pattern of neural network by structuring it so that the output of a layer could even feed back into a previous layer. Instead of a strictly one-directional signal, they made it possible for a signal to come back to a neuron and feed the previous layer, and the answers that came out of these patterns were fascinating.
They found that these different patterns of networking the neurons, which loosely observed and simplified the patterns of neurons in our own brains, worked remarkably well in computer programs.
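The feed-back idea can be sketched as a simple recurrent step: instead of signals moving only forward, the layer's previous output is mixed back in with the next input, so the network carries context from one step to the next. All sizes and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 4))    # input -> hidden connections
W_back = rng.normal(size=(4, 4))  # hidden output fed back to the same layer

def step(input_signal, previous_output):
    """Combine the new input with the layer's own previous output,
    so earlier signals influence how later ones are processed."""
    return np.tanh(input_signal @ W_in + previous_output @ W_back)

state = np.zeros(4)                       # no context yet
for word_vector in rng.random((5, 3)):    # e.g. one signal vector per word
    state = step(word_vector, state)      # context accumulates in `state`
print(state.shape)                        # (4,)
```

This is why such patterns suit language: the meaning of a word depends on the words that came before it, and the fed-back `state` is where that history lives.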
How Was ChatGPT Made?
When a human types in an input, like a chat message, it hits the initial part of ChatGPT's neural network, which tries to understand the context of the input; then the next part generates the answer. So they have broken the problem down into understanding the input and generating the answer. These neural networks have very complex patterns, and their behavior depends on the way you train them.
It is very similar to how a baby learns to speak a language. Parents, uncles, and other family members constantly talk to the baby, feeding it information that is unfamiliar; the baby receives all of it and forms patterns in its brain, which is basically how scientists train these initially random simulated neural networks. That chaotic, unsupervised flow of information changes when the child enters school and is guided by a teacher and by exam scores toward what counts as a better or a false answer. That guidance-based education is essentially supervised learning, which creates certain patterns under guidance, and ChatGPT is trained the same way.
When training the first neural network, there is no human involvement: they gather all kinds of information from all over the internet and feed it to the network, which starts to find rough patterns. OpenAI says it scraped the text of websites, blog posts, and everything else text-based, and dumped it all into this neural network. Trained this way, unsupervised, the network can find patterns in any input text that a user on the ChatGPT website writes. When an input comes in, the network can identify the patterns and context of what exactly that person is talking about in an instant.
On the other hand, the actual answering part, responding back to a human, is the task of another neural network, one that has been trained with heavy human supervision. OpenAI literally hired thousands and thousands of people for this. The output of the context-understanding neural network, in other words what the chatbot has understood, becomes the input to the response-generating neural network, which then gives out a certain response in text. That text is read, judged, and scored by humans, and these humans put limitations on the chatbot's answers: marking an answer as impermissible, forbidding certain sentences or words, or flagging an answer as entirely false.
There are also cases where ChatGPT would give an answer on how to murder another human being, and the human training the bot would respond, in effect, "you don't have morals, you cannot give these kinds of answers": reject. With this supervision of the response-generating neural network, it is able to learn morals and ethics and becomes capable of putting out human-like responses.
It's important to understand that there are, by and large, two neural networks going on inside ChatGPT.
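The two-network flow described above can be caricatured in a few lines. This is an entirely hypothetical sketch: the functions, the context dictionary, and the reject rule are stand-ins for illustration, not OpenAI's actual code or data.

```python
def understand(user_text):
    """Stage 1 (trained unsupervised on internet text):
    turn raw input text into an understood context."""
    return {"topic": user_text.strip().lower()}

# Limits of the kind human judges taught the second network (made up here).
REJECTED_TOPICS = {"how to murder"}

def generate_response(context):
    """Stage 2 (trained with human supervision):
    understood context in, response text out."""
    if context["topic"] in REJECTED_TOPICS:
        return "I can't help with that."   # judged "reject" by human trainers
    return f"Here is what I can tell you about {context['topic']}."

print(generate_response(understand("How to murder")))  # -> I can't help with that.
```

The key point the sketch preserves is the hand-off: the first stage's output (the understood context) is exactly the second stage's input.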
ChatGPT’s Unsupervised Learning
The current state of ChatGPT's unsupervised learning, the input-understanding part: that neural network took in text data from the internet up to the year 2021, and training it by dumping in all those blog posts and other text took about one year, which is crazy.
Over the course of that year, the training program was endlessly tweaking the connections and activations of all the neurons in the network that is supposed to understand the input.
The actual response-generating part, where the human judges are involved, took about six months.
So, once these two parts of the neural network are trained and complete, they serve customers for a period of time until a new version is released; OpenAI is already working on GPT-4. They claim it will use far more connections and neurons, which will make the AI smarter and more sophisticated.
In conclusion, current AI is very rigid: because of the way the simulated neurons work, it is very fixed, and it consumes a lot of energy. Compared to human beings and their intelligence, current AI is not something we should fear, but rather something we should explore and use to its maximum potential.