The Dangerous Fallacy of Artificial Intelligence

Artificial Intelligence (AI) is now the hottest subject since sliced bread, and companies like Nvidia are selling chips faster than Frito Lay. The recent AI woke debacle by Google’s Gemini is only the tip of the iceberg and its unhinged progressive vibe is a mere reflection of the stupidity that inhabits the corporate world in its quest for DEI — Diversity, Equity, Inclusion. As a quick side note, DEI is the latest recipe to attempt to elevate racial and ethnic segments of the population that would fail to excel solely based on their biological aptitude and meritocracy. If that wasn’t the case, then DEI wouldn’t be necessary, and every community and country on Earth would be on the same socio-techno-economic level by now, regardless of demographics.

To illustrate the extent to which Google went to erase history and the truth, these are pictures generated by Gemini when asked for an image of a pope and a Viking. It’s unnecessary to explain how incorrect the pictures are but it illustrates how deviant and desperate for validation the unintelligent people in charge of these corporations truly are.

Ultimately, it’s not about the human subspecies that are protected and intellectually elevated, but rather about those perpetrating the lies, because they want to be viewed as Gods and saviors of the less competent while being aware of their own inferiority. That’s the key driver and these people are detrimental to society.

Correcting the algorithms will never erase the intent and inbred stupidity of all those involved. This problem is not fixable unless the people are dismissed, Twitter style, and never employed again, anywhere, in a position of power or influence. The core issue is pure and simple envy of accomplished white people.

Let’s reiterate that AI is nothing more than automation on steroids, not intelligence, and a black Pope or Viking is the least of our problems because the errors can be easily identified today. It’s a different story if the lies persist and how they will affect future generations unaware of the fraud. As an added illustration, here’s the poor attempt by AI to correct spelling in a simple transposition of two letters in a very common word as this post was typed (underlined above). Where’s the “they” option?

The question below was posed to ChatGPT, and the answer succinctly describes what Artificial Intelligence truly is; it was surprisingly candid in highlighting the human input and the fact that "it is unlikely to achieve true human-like intelligence anytime soon."

Now let’s give credit where credit is due and appreciate that the AI tools that we’ve seen are mesmerizing and produce creative and impressive output. But it’s not intelligence.

But apart from the Marxist/Liberal/Progressive disease that inhabits the halls of Silicon Valley and many corporations throughout America, the AI problems are worse than what people think.

We already know that it’s rather easy to manipulate the output, but how are these Artificial Intelligence LLMs — Large Language Models — being trained? There’s the catch, and the dangers are monumental because the training hinges on available information. This takes us to academia, a largely fraudulent social disease that produces reams and reams of pure scientific garbage that is being used to train AI. Here’s a prime example of retractions reported by Nature for 2023 alone, and it merely scratches the surface of the problem. Now imagine how many more erroneous papers — peer reviewed, no less — are floating out there and still being accessed by AI.

Another recent retraction by a Harvard associated organization made the news, but plenty will never be reported.

The Dana-Farber Cancer Institute initiated retractions or corrections to 37 papers authored by four senior researchers following allegations of data falsification, a DFCI research integrity officer said on Sunday.

Here's another example of "incorrect" information being disseminated on a large scale, of which the great majority of society is unaware, as noted by Richard Horton, editor of The Lancet, while people hide behind PhDs and other somewhat worthless titles.

Lastly, a lawyer got in trouble for relying on fake cases generated by ChatGPT.

AI may have a future in Hollywood where there aren’t boundaries to creativity, fiction, lies and outright fraud, but its existence beyond that domain is far worse than a mere nuisance.

The dangerous fallacy of Artificial Intelligence is summarized as follows: Believing and relying on a system that continually feasts and digests mega tons of information provided by super fraudulent academia and the scientific community where truths are indistinguishable from lies, and then produces output that people and society will trust without a second thought.

Imagine a system that tells us that the cure for disease X is medicine Y and then you die after taking the jab. Guess we have already seen that one and AI was not the culprit — as far as we know. As it stands, we barely comprehend human biology as proven by the pharmaceutical and medical industries, never mind a plethora of other fields, including human intelligence — artificial or not. Worse yet, people will be led to believe in the power and wisdom of AI while AI is nothing more than the man behind the curtain used to control the population to the benefit of a few vile human beings.
