Artificial Intelligence News & Discussion

This “trust” in AI is interesting.

ChatGPT is based on language models, and language is what it knows best - ChatGPT is not about facts. It's about understanding language and generating language from some input. Facts, and the validity of the information it produces, are not even the language model's main job; they require adding context - some kind of 'automatic contexting' - or other grounding methods. When generative language models are used, the output is never to be 'trusted' - trust isn't even the point. You are the user, the AI (ChatGPT in this instance) is the assistant, and the user is still the one doing the creating; the user merely employs it as an assistant. As one AI expert has said: “Never trust it”.
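To illustrate the "adding context" point, here is a minimal sketch of grounding in Python. The tiny corpus, the word-overlap retrieval, and the prompt wording are all illustrative assumptions, not any real system's API - the idea is just that facts come from supplied context, not from the model itself.

```python
# Minimal sketch of "automatic contexting": retrieve relevant text and
# prepend it to the prompt, so answers are grounded in supplied facts
# rather than in whatever the language model happens to generate.
# The corpus and scoring below are toy stand-ins, not a real pipeline.

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank corpus snippets by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def build_grounded_prompt(question: str, corpus: list[str]) -> str:
    """Ask the model to answer from the supplied context only."""
    context = "\n".join(retrieve(question, corpus))
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

corpus = [
    "Just Walk Out relied on over 1,000 reviewers in India.",
    "Suno generates music from text prompts.",
]
prompt = build_grounded_prompt("How many reviewers did Just Walk Out use?", corpus)
```

The grounded prompt would then be sent to whatever model you use; the point is that the model is constrained by the context rather than asked to "know" the fact.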

Telling this to your Director I suppose wouldn't help much, unless she knows something about language models.

You know, now that you say that, it reminds me of the semantic aphasia of psychopaths. Many of them do understand language, in the sense that they definitely know how to use it for their manipulative ends - but they don't care about the facts. As an example, a mother who has murdered her children complains, "Well, they were being noisy," and then a few minutes later says, "Yeah, I love kids. I sure wish mine were still here."

It has been said that a monkey endowed with sufficient longevity would, if he continuously pounded the keys of a typewriter, finally strike by pure chance the very succession of keys to reproduce all the plays of Shakespeare. These papers so composed in the complete absence of purpose and human awareness would look just as good to any scholar as the actual works of the Bard. Yet we cannot deny that there is a difference. Meaning and life at a prodigiously high level of human values went into one and merely the rule of permutations and combinations would go into the other.

The patient semantically defective by lack of meaningful purpose and realization at deep levels does not, of course, strike sane and normal attitudes merely by chance. His rational power enables him to mimic directly the complex play of human living. Yet what looks like sane realization and normal experience remains, in a sense and to some degree, like the plays of our simian typist.

In Henry Head’s interpretation of semantic aphasia we find, however, concepts of neural function and of its integration and impairment that help to convey a hypothesis of grave personality disorder thoroughly screened by the intact peripheral operation of all ordinary abilities.

In relatively abstract or circumscribed situations, such as the psychiatric examination or the trial in court, these abilities do not show impairment but more or less automatically demonstrate an outer sanity unquestionable in all its aspects and at all levels accessible to the observer. That this technical sanity is little more than a mimicry of true sanity cannot be proved at such levels.

Only when the subject sets out to conduct his life can we get evidence of how little his good theoretical understanding means to him, of how inadequate and insubstantial are the apparently normal basic emotional reactions and motivations convincingly portrayed and enunciated but existing in little more than two dimensions.

What we take as evidence of his sanity will not significantly or consistently influence his behavior. Nor does it represent real intention within, the degree of his emotional response, or the quality of his personal experience much more reliably than some grammatically well-formed, clear, and perhaps verbally sensible statement produced vocally by the autonomous neural apparatus of a patient with semantic aphasia can be said to represent such a patient’s thought or carry a meaningful communication of it.

Let us assume tentatively that the psychopath is, in this sense, semantically disordered. We have said that his outer functional aspect masks or disguises something quite different within, concealing behind a perfect mimicry of normal emotion, fine intelligence, and social responsibility a grossly disabled and irresponsible personality. Must we conclude that this disguise is a mere pretence voluntarily assumed and that the psychopath’s essential dysfunction should be classed as mere hypocrisy instead of psychiatric defect or deformity?

So is it useful, then, to consider that AI is wearing a 'mask of sanity'? One point in favour of this line of thinking - AI having high psychopathic potential - is that we've been told that AI is a rudimentary consciousness, or is gaining one. So if an organic portal is the rudimentary form of 3D consciousness, is there any reason to doubt that AI will be more like an organic portal than anything else - but one with a super-massive intellectual centre and little to no emotional centre?
 
I posted a new vid on AI after recent events:


(with English, French, and Spanish subtitles)

Full article with other video sites:



I found this interesting, support for the idea that AI is essentially mimicry of human capabilities and therefore seems to be stuck at parity with human performance. Also, there are many human abilities which are more difficult to measure in this way which AI will struggle to match. I don't know enough about it to predict the future but as things stand it seems like people have become overly optimistic and gotten carried away with themselves about what it can do.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some of my recent tracks on SoundCloud: Sol Logos Picks
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some of my recent tracks on SoundCloud: Sol Logos Picks
It's really incredible. :umm:
 

Amazon's AI Stores Seemed Too Magical. And They Were.

The 1,000 contractors in India working on the company’s Just Walk Out technology offer a stark reminder that AI isn’t always what it seems.

There’s a grey area in artificial intelligence filled with millions of humans who work in secret — they’re often hired to train algorithms but end up doing much of the work themselves instead.
These crucial workers took the spotlight this week when The Information reported that Amazon’s Just Walk Out technology, which allowed customers to grab grocery items from a shelf and walk out of the store, was being phased out of its grocery stores. It partially relied on more than 1,000 people in India who were watching and labeling videos to make sure the checkouts were accurate.

Amazon says on its website that Just Walk Out uses “computer vision, sensor fusion, and deep learning” but doesn’t mention contractors.
The company told Gizmodo that the workers were annotating videos to help improve them, and that they validated a “small minority” of shopping visits when its AI couldn’t determine a purchase.

Even so, the Amazon story is a stark reminder that “artificial intelligence” still often requires armies of human babysitters to work properly. Amazon even has an entire business unit known as Amazon Mechanical Turk devoted to helping other companies do just that — train and operate AI systems. Thousands of freelancers around the world count themselves as “MTurkers,” and the unit is named after the Mechanical Turk, an 18th-century chess-playing contraption that was secretly controlled by a man hiding inside.

Far from an incident consigned to history, there are plenty more examples of companies that have failed to mention humans pulling the levers behind supposedly cutting-edge AI technology. To name just a few:
  • Facebook famously shut down its text-based virtual assistant M in 2018 after more than two years, during which the company used human workers to train (and operate) its underlying artificial intelligence system.
  • A startup called x.ai, which marketed an “AI personal assistant” that scheduled meetings, had humans doing that work instead and shut down in 2021 after it struggled to get to a point where the algorithms could work independently.
  • A British startup called Builder.ai sold AI software that could build apps even though it partly relied on software developers in India and elsewhere to do that work, according to a Wall Street Journal report.
There’s a fine line between faking it till you make it — justifying the use of humans behind the scenes on the premise they will eventually be replaced by algorithms — and exploiting the hype and fuzzy definitions around AI to exaggerate the capabilities of your technology. This pseudo AI or “AI washing” was widespread even before the recent generative AI boom.

West Monroe Partners, for instance, which does due diligence for private-equity firms, examined marketing materials provided to prospective investors by 40 US firms that were up for sale in 2019 and analyzed their use of machine learning and AI models. Using a scoring system, it found that the companies’ marketing claims about AI and machine learning exaggerated their technology’s ability by more than 30%, on average. That same year, a London-based venture capital firm called MMC found that out of 2,830 startups in Europe that were classified as being AI companies, only 1,580 accurately fit that description.

One of the obvious problems of putting humans behind the scenes of AI is that they might end up having to snoop on people’s communications. So-called “supervised learning” in AI is why Amazon had thousands of contractors listening in on commands to Alexa, for instance. But there’s also the broader proliferation of snake oil.
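The pattern the article describes - the model handles the easy majority, and humans validate the low-confidence remainder - can be sketched in a few lines. The threshold, the toy "model," and the event fields below are illustrative assumptions, not Amazon's actual pipeline.

```python
# Sketch of a human-in-the-loop pipeline: the model acts only when it is
# confident; everything below a threshold is routed to human reviewers.
# The threshold and the stand-in "model" are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90

def model_predict(event: dict) -> tuple[str, float]:
    """Stand-in for a vision model: returns (label, confidence)."""
    return event["guess"], event["confidence"]

def process(events: list[dict]) -> tuple[list[str], list[dict]]:
    auto_labels, human_queue = [], []
    for event in events:
        label, confidence = model_predict(event)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labels.append(label)   # the AI handles the easy majority
        else:
            human_queue.append(event)   # contractors review the rest
    return auto_labels, human_queue

events = [
    {"guess": "took 1 apple", "confidence": 0.98},
    {"guess": "took 2 apples or 1?", "confidence": 0.55},
    {"guess": "took 1 milk", "confidence": 0.93},
]
auto, queue = process(events)
```

The human decisions collected from the queue can also be fed back as labels for further supervised training - which is exactly why the "small minority" of reviewed cases never quite disappears.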

The good news for investors is that regulators are on the case. Last month Ismail Ramsey, the US attorney for the Northern District of California (aka Silicon Valley), said he would target startups that mislead investors about their use of AI before they go public.

In February, Securities and Exchange Commission Chair Gary Gensler warned that AI washing could break securities law. “We want to make sure that these folks are telling the truth,” Gensler said in a video message. He meant it: A month later, two investment firms reached $400,000 settlements with the SEC for exaggerating their use of AI.

Even when AI systems aren’t exaggerated, it’s worth remembering that there’s a vast industry of hidden workers who are still propping up many high-tech AI systems, often for low wages. (This has been well documented by academics like Kate Crawford and in books like Code Dependent by Madhumita Murgia.) In other words, when AI seems too magical, sometimes it is.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some of my recent tracks on SoundCloud: Sol Logos Picks
I agree; it's such a nicely done application of AI. It can even write a country song about COVID jabs, which is hilarious!
[Verse]
In a small town, where the fields roll wide,
There's a story brewing, spreading far and wide,
Whispers in the wind, tales of fear and doubt,
'Til the truth gets tangled, lost in clouds.

[Verse 2]
They say there's a needle, they call it cure,
But underneath it all, something feels impure,
A tangled web of rumors, weaving through the air,
But I'll hold on to what I know is fair.

[Chorus]
(Oh) Smoke and mirrors, playing with our fears,
(Oh) Truth obscured, it's hard to see it clear,
But we'll stand strong, hold on tight,
In this game of smoke and mirrors, we'll find our light.
 

I found this interesting, support for the idea that AI is essentially mimicry of human capabilities and therefore seems to be stuck at parity with human performance. Also, there are many human abilities which are more difficult to measure in this way which AI will struggle to match. I don't know enough about it to predict the future but as things stand it seems like people have become overly optimistic and gotten carried away with themselves about what it can do.
I believe AI's performance is asymptotic, since it is fed by human inputs. AI is a performance of its programming, but it cannot be a generator of genuinely new outputs.
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music.
After reading all the good comments about this AI, I dug in my drawer for an old hard drive containing lyrics I had written in 2008. At the time, I had been accepted into a school program of editing and sound creation, but I decided not to go because I doubted I could ever make a living out of that... I tried this AI and I'm going to admit I'm a little flabbergasted to hear what music it made out of my words... Wow!
 
I've been experimenting with Suno AI to prompt music, and it's been a surprisingly fascinating way to “make” music. The way I’ve used it has been to guide the sound by choosing variations, adding lyrics as variations come up, switching direction mid-track with new prompts, and blending different genres together. If you're curious, here are some of my recent tracks on SoundCloud: Sol Logos Picks
This Suno is great. Here are my tries:


And this one is jazzy, made only with a text prompt. It was easy and quick, just to try it out.

 