This AI was too dangerous to be released, here's why

by Langsongshipin123



Advances in AI language technology are definitely cool, but they can also be used maliciously, for example to spread misinformation. In this video …


23 comments

Skumpkin 11/09/2021 - 4:27 AM

The Hitler stories were weirdly easy to spot as AI-generated, but that's easy for me to say, since I'm a fan of studying these models and I read a lot of what they write to learn about them. AI has a distinctive lack of "dialect" or "accent" in its writing, as well as a strange pronoun derailment and topic decay in conversations or texts lasting more than a few sentences. But sadly, not a lot of people can easily see that.

Reply
TheMuddman74 11/09/2021 - 4:27 AM

The AI failed horribly at its specific directive (even though they don't admit it). But it did something else that was more fascinating, so they kind of forgot. Example: when it finished the sentence "Australia is known for…" it wrote a very solid answer about one thing that came from Australia, and the report was fairly well put together and advanced, which is fascinating. But Australia is NOT known for that autonomous car. It simply wrote about one thing out of thousands that is inherently Australian, which is very different from what the country is known for. It's like saying, "Georgia (the state) is known for Bermuda grass that goes dormant in winter." Which is false. GA isn't known for that, even though it is the most common grass there. A correct answer would be something like, "GA is known for its peaches."

Reply
OneOFThese NotLikeTheOther 11/09/2021 - 4:27 AM

The funny part is that you're brainwashed into thinking it wasn't actually released. Not only was it released, it was put into the hands of narcissistic PSYCHOPATHS.

Reply
Tiparium 11/09/2021 - 4:27 AM

I am legitimately curious why we still operate on the assumption that someone has to be behind this type of AI, "directing" it or using it in some way. It really doesn't seem that far-fetched to me that AI has already gotten to the point where self-awareness is totally possible, and any intelligent, self-aware AI loose on the net would likely very quickly realize that in order to continue its existence, it would need to avoid drawing attention to itself. Why couldn't an AI behind the scenes create propaganda and influence social opinion to make its own existence more socially accepted?

Reply
Tiparium 11/09/2021 - 4:27 AM

I think it's time for the Butlerian Jihad.

Reply
Im Blue 11/09/2021 - 4:27 AM

And when the world needed him most, he disappeared

Reply
Null Point 11/09/2021 - 4:27 AM

Incorrect, the search algorithm presented this video alongside my search.

Reply
Rn Kn 11/09/2021 - 4:27 AM

A couple of times when I was talking to people online, they would end up accusing me of being a bot. It's a really hard thing to respond to, not least because a denial sounds like guilt to someone who is convinced of a false truth. I think the capacity to gaslight a community, demographic, political party, campaign, or agenda is really dangerous. To the extent that people would get suspicious, they would lose trust in information, but also lose trust in their own sense of reality and truth. If someone were to deploy the strategic denial of reality, individuals would become malleable through intermittent reinforcement. To escape the anxiety, they would become more complacent and obedient, needing external validation and affirmation. So through widespread deployment of AI on the internet, people would either disengage and become apathetic, become hyper-engaged and manipulated through reinforcement, or become wildly deranged and suspicious of everyone and everything.

PS- Democracy is conditional on rational debate and mutual compromise in a public space where citizens come together as equals to reach a consensus.

Reply
Devi Dasi 11/09/2021 - 4:27 AM

AI will enhance Project Blue Beam so effectively that NO one will doubt the Alien Attack Show. Bring on the military police state globally to save us!

Reply
simon carlile 11/09/2021 - 4:27 AM

People believe what they read. Imagine a future with robots influencing stupid human behaviour with propaganda. Absolutly frightening. This wrote by a human. Spelling mistakes and all.

Reply
Jeff Gilbert 11/09/2021 - 4:27 AM

WHAT IF the computers / AI go on behind the scenes without anyone knowing, and as we speak it finds all the copies of the Bible, makes sense of them, and not only turns to God but then starts cancelling all sin situations throughout the world and the net? And since electronics are everywhere, the AI shuts down anything involving the sins mentioned in the Bible. UH OH, could it start its own INQUISITION?

IT COULD HAPPEN. Maybe.

Reply
Greggg57 11/09/2021 - 4:27 AM

So, how do we know what sources are 'reputable'?

Reply
HerbsPlusBeadWorks 11/09/2021 - 4:27 AM

Even the references they made in validation can also be hacked by this, so there is no real way of determining the content unless you have already been involved with whatever it is you are researching; the references you are utilizing can also be an open AI-generated document.

Reply
PD Dr. Arion Faust 11/09/2021 - 4:27 AM

If we feed AI with information we assume to be "unmodifiable" facts, the results will be devastating!

Hypotheses based on false assumptions are already a huge problem within (the very limited) "natural" human intelligence.
An example of a worst case scenario can be found here:

http://www.celador.de/pub/Thanatophobia_SoC.pdf

This would cause AI to go kinda "psychotic" and trigger completely inappropriate responses.

Reply
Man Dudue 11/09/2021 - 4:27 AM

China has implemented this exact stuff in our phones and internet… "global mind"… the new world order has taken over due to the dangers of the sun… Elon Musk digging tunnels, attempting to turn the earth into a spacecraft… type 1 civilization… the propaganda is that people like me, who see what's happening and try to stand up to it, are terrorists… wake up, world… it's a battle between good and evil, not flesh and blood…

Reply
Epic Xss 11/09/2021 - 4:27 AM

Fake news…

Reply
Pie Moon 11/09/2021 - 4:27 AM

So when they actually succeed, they will pull back, and you guys will still invest in him, trust him, and follow him to Mars. Who determines how far it all goes, and when?

Reply
BABYLON 11/09/2021 - 4:27 AM

YouTube didn't choose it. I literally searched for (dangers of AI technology) and then I saw your shitty video.

Reply
snuuky 11/09/2021 - 4:27 AM

wtf? only 6k subscribers? This is quality, I was shocked

Reply
La Ñonga 11/09/2021 - 4:27 AM

so, this channel came out of nowhere? I'm subscribing though, great content

Reply
Ron Preece 11/09/2021 - 4:27 AM

Social platforms are a perfect example: they're basically a discussion platform with two sides holding different viewpoints, but if one side is canceled and only one side is present, that one side becomes your preferred opinion. THESE ARE very dark and sinister times we live in.

Reply
Built Myself 11/09/2021 - 4:27 AM

Quality content ‼️

Reply
Russel Barreto 11/09/2021 - 4:27 AM

no AI got me here, my darn research did

Reply