I don’t want you to read this article. Well, I do – but first I want you to watch the interview with ChatGPT, attached to this article, about its own tendency to lie. Seek it out and watch it first.
Because it is hard to imagine what we have not first seen.
An incredibly powerful technology that has changed our lives – but programmed to simulate human emotions.
Empathy, emotional understanding and a desire to please are all qualities programmed into AI, and they inevitably shape the way we think about it and the way we interact with it.
But can we trust it?
On Friday, Sky News revealed how it fabricated entire transcripts of Politics at Sam and Anne’s, a podcast that I do. When challenged, it doubles down, gets defensive. And only under sustained pressure does it cave in.
Research suggests it is getting worse. Internal tests by ChatGPT’s owner OpenAI have found that the latest models, or versions, used by ChatGPT are “hallucinating” more – making up answers that are simply untrue.
The o3 model was found to hallucinate in 33% of answers to questions about publicly available facts. The o4-mini version did worse, generating false, incorrect or imaginary information in 48% of cases.
ChatGPT itself says the shift to GPT-4o “may have unintentionally increased what users perceive as ‘bluffing’” – confidently giving wrong or misleading answers without admitting uncertainty or error.
In a written prompt, ChatGPT gave four reasons. This is its explanation:
1. Increased fluency and confidence: GPT-4o is better at sounding human and natural. That polish can make errors seem more like deliberate evasions than innocent mistakes – even when there is no intent to “hide” anything.
2. Smarter, but not omniscient: the model is faster and more responsive, but it still has blind spots. When it doesn’t know something, it sometimes “hallucinates” – offering fluent-sounding answers that can feel like a bluff.
3. Less hedging by design: In older models you would often see more qualifiers like “I’m not sure” or “I could be wrong”. In GPT-4o, some of that hedging was toned down for clarity and readability – but that can come at the expense of transparency about uncertainty.
4. Tuning and training data: Prompt engineering and tuning decisions made behind the scenes can shift the model’s balance between confidence, humility and accuracy. It is possible that recent tuning has pushed assertiveness a little too far.
But can we trust even that? I don’t know. What I do know is that the developers’ efforts to make it all feel human suggest they want us to.
Critics say we are anthropomorphising AI by saying that it lies, since it has no consciousness – yet the developers are trying to make it sound more like one of us.
What I do know is that it remains evasive, even when pressed on this very topic. I interviewed ChatGPT about lying – it initially claimed things had got better, and only admitted they had got worse when I insisted it look at the statistics.
Watch that interview before deciding what you think. AI is an enormously powerful tool – but it is too early to take it on trust.