Google's Gemini, Salt & Light, AI
As an insider in LLM training, I know how Google's faux pas happened.
Jesus said, “You are the salt of the earth… you are the light of the world.” Matthew 5:13-14.
You may have heard the uproar this week as the public discovered that Gemini, Google’s newest Large Language Model (LLM), better known to most of us as Artificial Intelligence (AI), failed miserably when asked historical questions.
When you go on your computer and ask Artificial Intelligence models (e.g. Gemini) a question, that question is called a “prompt.” When AI gives you an answer, that answer is called a “response.”
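If you are curious what that exchange looks like behind the scenes, here is a minimal sketch of sending a prompt and reading back a response, assuming Google's publicly available google-generativeai Python package, a placeholder API key, and the "gemini-pro" model name. It is an illustration of the prompt/response idea, not Google's internal workflow.

```python
# A minimal sketch of a prompt/response exchange,
# assuming the public google-generativeai Python package and a valid API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

# The question you type is the "prompt" ...
prompt = "Who were some notable U.S. Senators in the 1800s?"

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(prompt)

# ... and what the model sends back is the "response."
print(response.text)
```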
Here are some responses from Gemini to historical prompts about (1) U.S. Senators in the 1800s, (2) German soldiers in 1943, and (3) the Founding Fathers of the United States (source: Adi Robertson in The Verge).
An Apology from Google
The company apologized on Friday for its errors.
In a company statement, Google said:
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions. We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
I write this post not to condemn Google.
I write this post to encourage Christians.
How Did Gemini Err?
Large Language Models (LLMs) such as Gemini have “trainers” who feed them billions of bytes of information.
Trainers have biases.
When Gemini answers a prompt, it draws on information input by trainers from India, Pakistan, China, and other places around the world.
Observing this bias motivated me to begin working as a trainer for Large Language Models (LLMs).
It’s been a fascinating journey for me the last few weeks…