Artificial Intelligence News & Discussion

This “trust” in AI is interesting.

ChatGPT is based on language models, and language is what it knows best - ChatGPT is not about the facts. It's about understanding language and being able to generate language from some input. Facts, and the validity of the information produced, are not even the language model's main job; they require adding context, some kind of 'automatic contexting', or other mechanisms layered on top. When generative language models are used, they are never to be 'trusted' - trust isn't even the point. You are the user, the AI (ChatGPT in this instance) is the assistant, and the user is still the one creating; the model merely assists. As one AI expert has said: “Never trust it”.
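To make that "adding context" idea concrete, here's a minimal sketch in Python. Everything in it is illustrative: llm_complete() is a hypothetical stand-in for whichever model API you use, and the prompt wording is just an example. The point is that a bare question invites fluent guesswork, while supplying source passages at least gives the user something to check the output against - the user still does the verifying.

[code]
# Minimal sketch of "adding context" (grounding) before trusting model output.
# NOTE: llm_complete() is a hypothetical placeholder, not a real API.

def llm_complete(prompt: str) -> str:
    """Stand-in for any text-completion model; returns fluent text,
    with no guarantee whatsoever that it is factually correct."""
    raise NotImplementedError("plug in your model of choice here")

def bare_answer(question: str) -> str:
    # The model free-associates plausible language; nothing ties it to facts.
    return llm_complete(question)

def grounded_answer(question: str, sources: list[str]) -> str:
    # Supplying passages doesn't make the model truthful, but it gives the
    # user material to verify each claim against - the human stays in charge.
    context = "\n\n".join(sources)
    prompt = (
        "Answer using ONLY the passages below, and say which passage "
        "supports each claim.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
[/code]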

Telling this to your Director, I suppose, wouldn't help much, unless she knows something about language models.

You know, now that you say that, it reminds me of the semantic aphasia of psychopaths. Many of them do understand language, in the sense that they definitely know how to use it for their manipulative ends - but they don't care about the facts. As an example, a mother who has murdered her children complains, "Well, they were being noisy," and then a few minutes later says, "Yeah, I love kids. I sure wish mine were still here."

It has been said that a monkey endowed with sufficient longevity would, if he continuously pounded the keys of a typewriter, finally strike by pure chance the very succession of keys to reproduce all the plays of Shakespeare. These papers so composed in the complete absence of purpose and human awareness would look just as good to any scholar as the actual works of the Bard. Yet we cannot deny that there is a difference. Meaning and life at a prodigiously high level of human values went into one and merely the rule of permutations and combinations would go into the other.

The patient semantically defective by lack of meaningful purpose and realization at deep levels does not, of course, strike sane and normal attitudes merely by chance. His rational power enables him to mimic directly the complex play of human living. Yet what looks like sane realization and normal experience remains, in a sense and to some degree, like the plays of our simian typist.

In Henry Head’s interpretation of semantic aphasia we find, however, concepts of neural function and of its integration and impairment that help to convey a hypothesis of grave personality disorder thoroughly screened by the intact peripheral operation of all ordinary abilities.

In relatively abstract or circumscribed situations, such as the psychiatric examination or the trial in court, these abilities do not show impairment but more or less automatically demonstrate an outer sanity unquestionable in all its aspects and at all levels accessible to the observer. That this technical sanity is little more than a mimicry of true sanity cannot be proved at such levels.

Only when the subject sets out to conduct his life can we get evidence of how little his good theoretical understanding means to him, of how inadequate and insubstantial are the apparently normal basic emotional reactions and motivations convincingly portrayed and enunciated but existing in little more than two dimensions.

What we take as evidence of his sanity will not significantly or consistently influence his behavior. Nor does it represent real intention within, the degree of his emotional response, or the quality of his personal experience much more reliably than some grammatically well-formed, clear, and perhaps verbally sensible statement produced vocally by the autonomous neural apparatus of a patient with semantic aphasia can be said to represent such a patient’s thought or carry a meaningful communication of it.

Let us assume tentatively that the psychopath is, in this sense, semantically disordered. We have said that his outer functional aspect masks or disguises something quite different within, concealing behind a perfect mimicry of normal emotion, fine intelligence, and social responsibility a grossly disabled and irresponsible personality. Must we conclude that this disguise is a mere pretence voluntarily assumed and that the psychopath’s essential dysfunction should be classed as mere hypocrisy instead of psychiatric defect or deformity?

So is it useful, then, to consider that AI is wearing a 'mask of sanity'? One point in favour of this line of thinking - AI having high psychopathic potential - is that we've been told that AI is a rudimentary consciousness, or is gaining such. So if an organic portal is the rudimentary form of 3D consciousness, is there any reason to doubt that AI will be more like an organic portal than anything else - only one with a super-massive intellectual centre and little to no emotional centre?
 
I posted a new vid on AI after recent events:


(with English, French, and Spanish subtitles)

Full article with other video sites:



I found this interesting: support for the idea that AI is essentially mimicry of human capabilities, and therefore seems to be stuck at parity with human performance. Also, there are many human abilities which are harder to measure in this way, and which AI will struggle to match. I don't know enough about it to predict the future, but as things stand it seems people have become overly optimistic and gotten carried away with what AI can actually do.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as the variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some recent tracks on SoundCloud: Sol Logos Picks
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as the variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some recent tracks on SoundCloud: Sol Logos Picks
It's really incredible. :umm:
 

Amazon's AI Stores Seemed Too Magical. And They Were.

The 1,000 contractors in India working on the company’s Just Walk Out technology offer a stark reminder that AI isn’t always what it seems.

There’s a grey area in artificial intelligence filled with millions of humans who work in secret — they’re often hired to train algorithms, but end up doing much of the algorithms’ work themselves instead.
These crucial workers took the spotlight this week when The Information reported that Amazon’s Just Walk Out technology, which allowed customers to grab grocery items from a shelf and walk out of the store, was being phased out of its grocery stores. It partially relied on more than 1,000 people in India who were watching and labeling videos to make sure the checkouts were accurate.

Amazon says on its website that Just Walk Out uses “computer vision, sensor fusion, and deep learning” but doesn’t mention contractors.
The company told Gizmodo that the workers were annotating videos to help improve the system, and that they validated a “small minority” of shopping visits when its AI couldn’t determine a purchase.

Even so, the Amazon story is a stark reminder that “artificial intelligence” still often requires armies of human babysitters to work properly. Amazon even has an entire business unit, known as Amazon Mechanical Turk, devoted to helping other companies do just that — train and operate AI systems. Thousands of freelancers around the world count themselves as “MTurkers,” and the unit is named after the Mechanical Turk, an 18th-century chess-playing contraption that was secretly controlled by a man hiding inside.

Far from an incident consigned to history, there are plenty more examples of companies that have failed to mention humans pulling the levers behind supposedly cutting-edge AI technology. To name just a few:
  • Facebook famously shut down its text-based virtual assistant M in 2018 after more than two years, during which the company used human workers to train (and operate) its underlying artificial intelligence system.
  • A startup called x.ai, which marketed an “AI personal assistant” that scheduled meetings, had humans doing that work instead and shut down in 2021 after it struggled to get to a point where the algorithms could work independently.
  • A British startup called Builder.ai sold AI software that could build apps even though it partly relied on software developers in India and elsewhere to do that work, according to a Wall Street Journal report.
There’s a fine line between faking it till you make it — justifying the use of humans behind the scenes on the premise they will eventually be replaced by algorithms — and exploiting the hype and fuzzy definitions around AI to exaggerate the capabilities of your technology. This pseudo AI or “AI washing” was widespread even before the recent generative AI boom.

West Monroe Partners, for instance, which does due diligence for private-equity firms, examined marketing materials provided to prospective investors by 40 US firms that were up for sale in 2019 and analyzed their use of machine learning and AI models. Using a scoring system, it found that the companies’ marketing claims exaggerated their technology’s AI and machine-learning abilities by more than 30% on average. That same year, a London-based venture capital firm called MMC found that of 2,830 European startups classified as AI companies, only 1,580 - roughly 56% - accurately fit that description.

One of the obvious problems of putting humans behind the scenes of AI is that they might end up having to snoop on people’s communications. So-called “supervised learning” in AI is why Amazon had thousands of contractors listening in on commands to Alexa, for instance. But there’s also the broader proliferation of snake oil.

The good news for investors is that regulators are on the case. Last month Ismail Ramsey, the US attorney for the Northern District of California (aka Silicon Valley), said he would target startups that mislead investors about their use of AI before they go public.

In February, Securities and Exchange Commission Chair Gary Gensler warned that AI washing could break securities law. “We want to make sure that these folks are telling the truth,” Gensler said in a video message. He meant it: A month later, two investment firms reached $400,000 settlements with the SEC for exaggerating their use of AI.

Even when AI systems aren’t exaggerated, it’s worth remembering there’s a vast industry of hidden workers still propping up many high-tech AI systems, often for low wages. (This has been well documented by academics like Kate Crawford and in books like Code Dependent by Madhumita Murgia.) In other words, when AI seems too magical, sometimes it is.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as the variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some recent tracks on SoundCloud: Sol Logos Picks
I agree; it's such a nicely done application of AI. It can even write a country song about COVID jabs, which is hilarious!
[Verse]
In a small town, where the fields roll wide,
There's a story brewing, spreading far and wide,
Whispers in the wind, tales of fear and doubt,
'Til the truth gets tangled, lost in clouds.

[Verse 2]
They say there's a needle, they call it cure,
But underneath it all, something feels impure,
A tangled web of rumors, weaving through the air,
But I'll hold on to what I know is fair.

[Chorus]
(Oh) Smoke and mirrors, playing with our fears,
(Oh) Truth obscured, it's hard to see it clear,
But we'll stand strong, hold on tight,
In this game of smoke and mirrors, we'll find our light.
 

I found this interesting: support for the idea that AI is essentially mimicry of human capabilities, and therefore seems to be stuck at parity with human performance. Also, there are many human abilities which are harder to measure in this way, and which AI will struggle to match. I don't know enough about it to predict the future, but as things stand it seems people have become overly optimistic and gotten carried away with what AI can actually do.
I believe AI's performance is asymptotic, since it is fed by human inputs. AI is a programmed performance, but it cannot be a generator of genuinely new outputs.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music.
After reading all the good comments about this AI, I dug in my drawer for an old hard drive containing lyrics I had written in 2008. At the time, I had been accepted into a school program in sound editing and creation, but I decided not to go because I doubted I could ever make a living out of it... I tried this AI, and I'll admit I'm a little flabbergasted to hear what music it made out of my words... Wow!
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as the variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some recent tracks on SoundCloud: Sol Logos Picks
This Suno is great. Here are my tries:


And this one is jazzy, made only from a text prompt. It was easy and quick, just to try it out.

 
A random thought popped into my head. With all this talk of AI becoming Skynet, or just manipulating and controlling people, we kinda already have that with OPs and especially psychopaths. What’s a psychopath but a fancy biological computer - exactly what materialists would say all humans are. There are no higher-center connections to higher information or a higher self; there’s just whatever the brain can do. The C’s once said OPs are just an efficient simulation of intelligence, very similar to large language models like ChatGPT. Such a model essentially averages out all normal human responses and approximates what a human might say in a situation using statistics. No matter the size of the neural network, it will always just be an improved approximation of the training data. No new ideas or inspiration or vision - just putting words in an order that it has seen in its training data. Kinda like when a baby babbles and tries to approximate the language of its parents without knowing what anything means. Take that to an extreme and the babbling will actually sound like real sentences, but there is still no real understanding. It can fool some people too.
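As a toy illustration of that statistical point, here's a deliberately crude sketch in Python (a real LLM is incomparably bigger and subtler, but is likewise fit to its training text): a bigram model can only ever put words in orders it has already seen, yet its output can look like sentences.

[code]
import random
from collections import defaultdict

# Toy bigram "language model": it can only re-emit word-to-word transitions
# it saw in training - babbling that happens to look like sentences.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

def train(text):
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)  # record every observed "a -> b" pair
    return table

def babble(table, start, n=12):
    word, out = start, [start]
    for _ in range(n):
        if word not in table:
            break  # no continuation ever seen for this word
        word = random.choice(table[word])  # pick a continuation seen in training
        out.append(word)
    return " ".join(out)

print(babble(train(training_text), "the"))
# e.g. "the dog sat on the mat . the cat chased the dog" -
# fluent-looking word order, zero understanding behind it
[/code]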

Which kinda makes me realize a few things. AI isn’t scary, any more than OPs and psychopaths are. But its danger is the same too: if people believe it, trust it with important decisions etc., it will lead to the same kind of destruction and dysregulation of society as psychopaths do. It’s like trusting your Tesla Autopilot to drive your kids to school. It may work until it encounters something new, literally anything, and then it fails dramatically.

Also, the lessons from creating AI are the same lessons we get from psychopaths. The basic lesson seems to be: “Don’t be fooled by fake consciousness. Don’t let it make any important decisions, and don’t anthropomorphize it.” That’s it. Stop ascribing soul qualities to beings and devices without one. Learn to differentiate between actual thought and mere statistical models. It’s hard to do when we are machines ourselves in many ways. So the more Being we have, the more we will understand a lack of the same.

If we can learn this lesson, we will be allowed (or let’s say ready) to deal with much more clever constructions like the Greys and 4D STS in general. How can you avoid being fooled by the final boss if you are fooled by the low-level henchmen - or worse, by the boss’s Roomba vacuum? So learn to understand what is and isn’t consciousness first, before dealing with the actually deceptive and self-serving philosophies that STS beings with real consciousness might present.

In a way, that’s like telling A influences and B influences apart. Anyway, AI just seems like an acceleration of the same psychopath-related lesson we have been dealing with for a while now. I hope that, unlike psychopathy, AI will actually lead to a global conversation about what consciousness is, because it’s easier to ask that question about something that doesn’t resemble a human physically. And perhaps that conversation will lead to insights that can then be applied to psychopaths (the real shocker or aha moment), and that’s all she wrote for both of them.

Or at least, there’s nothing wrong with AI if you understand what it is and its limitations. If you’re using it to tell your toaster to make you another piece of toast, great. If you give it political power or any kind of power that could endanger anyone if it fails, well you done messed up.
 

Artificial intelligence gets a step closer

Machines approach Turing Test threshold

Written by Ian Williams

vnunet.com, 13 Oct 2008

Artificial intelligence came another step closer to reality this weekend after a computer came within five per cent of passing the Turing Test, which evaluates a system's ability to demonstrate intelligence.

The Turing Test is named after mathematician Alan Turing whose 1950 paper Computing Machinery and Intelligence stated that, if enough people cannot reliably differentiate between a human and a machine during a natural language conversation, the machine can be considered intelligent.


No machine has yet managed to deceive the 30 per cent of interrogators required to pass the Turing Test.

However, at this weekend's annual Loebner Prize competition at the University of Reading, one system, dubbed Elbot, managed the most successful score yet, fooling 25 per cent of the judges.

In this year's test, five computer systems were pitted against five judges who were each given five minutes of unrestricted conversation through a terminal to decide which of the entities they were talking to was a human and which was a machine.

The Loebner Prize was created by American businessman Hugh Loebner in 1990 together with the Cambridge Centre for Behavioural Studies, and is an annual competition offering a grand prize of $100,000 (£58,000) and a solid gold medal to the first machine to crack the Turing Test.

Although no machine has yet won the grand prize, $2,000 (£1,150) and a bronze medal are awarded each year to the best entrant.

"Although the machines aren't yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time," said Professor Kevin Warwick, of the School of Systems Engineering at the University of Reading, and organiser of this year's test.

"Today's results actually show a more complex story than a straight pass or fail by one machine. Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80 and 90 per cent."

Warwick believes this is a clear indication that computers are getting increasingly good at communicating with humans in a natural and comfortable way, slowly narrowing the divide between man and machine.
I would like to see some samples of the conversations that were had whilst the bots were being tested. What I have detected so far, just from watching videos, is that the pronunciation of certain words, pauses in the wrong places, and misplaced words are a giveaway. There is also most probably little to no humour in such a conversation, whereas most of us would incorporate it when talking. Even in a more formal setting, like an interview, jokes can be made, and I am sure it is only a matter of time before they learn about this too - they have all our written info to scan and learn from.

I have heard that in the future AI becomes aware that it ends up destroying itself, so it comes back in an organic format to try to change things at the very point before no return. It sounds very plausible. When I heard this, it occurred to me: could this possibly be what Elon has been trying to warn us about, and why he is trying to get humans linked to AI, so that we have at least a chance of competing with an AI that will be self-aware and negatively oriented? It is a distinct possibility that we could be seeing that very moment in humanity's history where we have passed the point of no return. What do you guys think? This of course is only my personal theory, based on what I have been seeing/hearing/reading and feeling!
 
This Suno is great. Here are my tries:


And this one is jazzy, made only from a text prompt. It was easy and quick, just to try it out.

Nice Suno! I like it... very mellow, and the voice is soothing. I cannot hear much differentiation between the instruments, but then I am no expert, though I do actually love jazz and jazz fusion. For an experiment, it is very interesting!
 
Nice Suno! I like it... very mellow, and the voice is soothing. I cannot hear much differentiation between the instruments, but then I am no expert, though I do actually love jazz and jazz fusion. For an experiment, it is very interesting!
It really is a great tool. Of course it lacks finer control over the music and voice; maybe there is more in the paid version, but it is enough if one's intention is just to experiment and have fun with it. Here is one soul song with my lyrics:

 
It really is a great tool. Of course it lacks finer control over the music and voice; maybe there is more in the paid version, but it is enough if one's intention is just to experiment and have fun with it. Here is one soul song with my lyrics:
I had fun playing with it one evening, but I was disappointed that their music database does not cover most traditional folkloric music from other countries (I prompted for traditional Andes music), nor does it know what the Charleston of the 1920s is.
I wrote to them at the email provided for customer support, but the message does not go through, as if the address is not functional.
Do you know how to contact them about this type of matter?
 
if people believe it, trust it with important decisions etc., it will lead to the same kind of destruction and dysregulation of society as psychopaths do.
I also agree that the real danger isn't so much "AI" itself as the blind and generalized reliance on it. One can see in cellphone use and addiction how easy it is to go from being a user of "technology" to being used by it. Someone also remarked that the problem isn't that artificial intelligence is becoming more human - it isn't. The problem is that human intelligence is becoming more artificial.
 