Artificial Intelligence News & Discussion

Here is the AI-created ‘documentary’ called ‘Last stand’ by bright pen. The scenario, narrative and ‘outcome’ are for humanity to think about.

 
Economist: Samsung's corporate data leaked online due to chatbot ChatGPT
The chatbot ChatGPT has caused a leak of Samsung's corporate data, reports the Korean edition of the Economist.

Sources in the company said that employees of the IT giant began using the chatbot in their work three weeks ago. Because the corporation's specialists used ChatGPT incorrectly, Samsung's confidential data became freely available.

The article says that in the first case, an engineer pasted source code for semiconductor equipment into the chatbot's prompt line. In the second case, another specialist shared secret source code with ChatGPT to simplify its verification; that data was lost as well.

The third leak occurred when a Samsung employee tried to use the chatbot to create meeting minutes. According to journalists, in all three cases the confidential information became part of the artificial intelligence's knowledge base.

Samsung has taken measures to prevent a recurrence, warning employees not to share with the chatbot any confidential information or data that could harm the corporation.

 
It occurred to me that the novelty in these LLMs like GPT3 and GPT4 etc. may lie in the introduction of "randomness". Some randomness in the latent representation in the generative models (for example for image generation) was what made them uncanny relative to the previous generation of "fixed" neural models.
Now, I remember something about recording the voices of dead people (I tried with a cassette tape in a supposedly haunted house one night when I was 11 yrs old but it didn't yield anything). It appears that it works only with recorders that have some electronic noise, or, in the case of old cassette recorders without filtering, the magnetic noise on the tape. Modern digital recorders do not work. From D. Radin's experiments it appears also that consciousness (or unconsciousness) may influence quantum noise as well.
Now, these LLMs on their servers, or locally on one's computer, probably use pseudo-random number generators. They may appear to be random to us humans, but at some deeper level they are not. However, at some point, someone somewhere will have the idea of using real random numbers, perhaps a quantum random number generator or just the hardware's heat noise as used in cryptography. At that point it would become easier for some entity, ex-living human, demonic or whatever, or even an amalgamation of living human consciousnesses, to inhabit these machines and communicate through these models. A less easy way would be to use cosmic rays if the servers are distributed over a large enough area, but that's another mechanism altogether. Just a few random thoughts.
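The pseudo-random vs. "really random" distinction above can be made concrete in a few lines of Python (a minimal sketch, not how any particular LLM server is actually implemented): a seeded pseudo-random generator replays exactly the same "random" sequence every time, while `secrets.SystemRandom` draws from the operating system's entropy pool, which on most machines is fed by hardware noise.

```python
import random
import secrets

# Pseudo-random: the sequence only *looks* random. Reseeding with the
# same value reproduces it exactly, so at a deeper level it is fixed.
prng1 = random.Random(42)
prng2 = random.Random(42)
run_a = [prng1.random() for _ in range(5)]
run_b = [prng2.random() for _ in range(5)]
assert run_a == run_b  # identical: determined entirely by the seed

# OS-backed randomness: SystemRandom reads the operating system's
# entropy pool (hardware/thermal noise, timing jitter), so there is
# no seed to replay.
trng = secrets.SystemRandom()
run_c = [trng.random() for _ in range(5)]

print("seeded PRNG:", run_a)
print("OS entropy :", run_c)
```

Token sampling in an LLM rests on the same mechanism: the model outputs probabilities and a generator like one of the above picks the next token, so swapping the PRNG for a hardware or quantum source would change only where the "dice rolls" come from, not the model itself.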
 
Perhaps that's why divination often uses randomness: a deck of cards, tea leaves, dowsing rods, etc. It seems things that are highly deterministic are difficult or impossible to nudge off their path, whereas randomness seems to allow the possibility of choice or free will. Maybe that's why quantum computers are so interesting: they are inherently random. I wonder what determines which sort of consciousness would be able to influence that randomness by becoming the "observer" of the quantum bit and thus performing a "quantum wave collapse"?

The C's said that in channeling the receiver and the sender are of equal importance - the type and quality of the communication you can receive depends a lot on your FRV and knowledge. My guess is that all divination works similarly - your ability to dowse or use tarot cards or tea leaves for that matter probably depends on "who you are and what you see". Does that mean that quantum computers, assuming their quantum randomness can be utilized by non-human consciousness, could connect to positive or negative entities, and receive good or bad quality signal depending on the operator of the computer?
 
Good point SAO. May I suggest an analogy? If I remember correctly, when Neo was talking with the Architect, the ‘creator’ of the Matrix stated that the first version of the Matrix was programmed for everyone to ‘be happy’, and that was a dramatic failure (no juice/suffering, so to say - from Matrix 4). The point I am trying to make is that when there is an unknown factor in the equation, decisions could be made allowing for free will/changing patterns of behavior of us/consciousness-reading units? 🤔 The concept of quantum ‘uncertainty’...
 
Here is a related scientific article in case somebody missed it. It evaluates ‘our’ projected/hologram reality, how one might try to acknowledge that we are imprisoned in it, and offers some suggestions for how that simulation might be made to ‘run amok’...

The Self-Simulation Hypothesis Interpretation of Quantum Mechanics
 
I installed locally a downsized ChatGPT-like model called GPT4All that can run on a small computer's CPU. It's fun to some extent, but after a few lines it becomes exhausting because of the nonsensical exchanges:
[attached screenshot: GPT4All chat transcript]
It'll improve for sure, but even now running locally is way better than connecting to big tech servers to play with these things (not persons, things).
 
Dr. Mercola posted the following this morning:

STORY AT-A-GLANCE

  • OpenAI’s ChatGPT is the most rapidly adopted tech platform in history, acquiring more than 1 million users in the first five days. Less than two months after its public release, it had more than 100 million users
  • ChatGPT uses machine learning to generate human-like responses in everyday language to any question asked
  • While AI platforms like ChatGPT are neutral in and of themselves, it’s clear that they will soon become an integral part of the globalist control grid. In short order, chatbots will replace conventional search engines, giving you only one purported “correct” answer. As a result, true learning, and hence personal development, will essentially cease
  • Dangerous bias is only the beginning of the problems that artificial general intelligence (AGI) might bring. Ultimately, even the technocrats that aim to use AGI for their own nefarious purposes might not be able to control it
  • Already, AGI is capable of writing near-flawless computer code. As it gets better, it will eventually start writing, and potentially rewriting, its own code. AGI poses an existential threat to humanity because 1) no one will be able to contradict the AI once it’s in charge of most societal functions, and 2) no one will be able to control how it will use its ever-expanding capabilities
 
Also posted just yesterday, from Mark Maunder of Wordfence:

Friday Long Read: What To Do About AI


From his perspective, everyone should get on board and adopt it early, so this is an entirely different take from the previous post linking to Dr. Mercola's article. I did appreciate one of the comments at the end, which, credit to Mark, he left in even though the opinion expressed was dramatically opposed. I will add it here:

Lonnie Busch
April 7, 2023
12:57 pm
As a creator, I must say, the future you paint is bleak, because creation is what makes life worth living. And before you say it, AI does not create, it appropriates! I've been at it for 70 years and have loved every second, even moving from the airbrush to a computer (though I doubt you know what an airbrush is or care, though I see AI ripping off that style all the time). But even so, the computer was still just a tool to be harnessed. It still required talent and vision. The fact that any schlub can "create" with AI (a truly “derivative technology” that preys and can only exist on the creativity of actual artists), and that we welcome this, just demonstrates how bankrupt we are as a society. I feel bad for my grandchildren, and all the artists, musicians, writers that will slowly vanish over time, and there will be no bringing them back, the true visionaries will be gone forever, and we’ll be left with the greedy and the money hungry, leveraging AI for every last f****ng penny! But young people are so hypnotized by technology that they don't get it; as you said, "Know that changes aren’t permanent and that change is." You quote the very artists that will be lost forever! How does that work? I use Wordfence and have loved it, but as my importance as an artist and creator vanishes, so will my websites, and so will my need for Wordfence and my web host and my computers. There are consequences to putting machines over humans, but we remain as stupid as ever—society devolving into chaos, people with no regard for one another; we deserve the future we are creating, though I'm not sure our children do. Call me a Luddite. But think about this; why don’t we remove all speed limits on our highways, and take down the traffic lights and stop signs? Because it’s dangerous and people will die; luckily sober minds have prevailed in those areas and realized to do such a thing is insane.
We don’t have sober minds when it comes to money and greed, and that’s what AI is, another fast track to wealth and riches by the lazy and uninspired, sociopaths so spiritually insolvent that no amount of money will ever fill the void. Progress and technology, machines over humans, like speed limits and traffic lights, are choices, not eventualities. I know that technology has cured cancer, eradicated poverty, abolished social and racial unrest, has destroyed income inequality, and thank god, has finally solved climate change! Or not.
 
Substack writer John Carter pens this article, which I thought was good. At one point, as he described the hollowness that can be seen with AI (in AI-created poetry or photos/paintings), I was thinking of Collingwood and history. What John might be saying seemed to me similar to what Collingwood looked at: the shallow dive into the past with a copy-and-paste future outcome. Sure, the words, coins or what have you look authentic, yet the historian never tried to place themselves in time, in the shoes of the people of those times, as AI cannot possibly ever do - it is just not human.

Anyway, have a read if so inclined:

I’m Already Bored With Generative AI

Seriously, what’s the point?

Apr 9
 
I've talked to a few people this week about these LLMs (ChatGPT and company), and what transpires is worse than anticipated. Now, these few people are internet-wise: they do not trust Wikipedia or anything they see on the internet, and surely do not trust some stranger in the street. They also know that lots of people can fake knowing things by the way they talk about them (very prevalent in academia, the media, politics... everywhere really). However, there is this strange fascination with these models, onto which they project personhood, "intelligence", personality and... trust. Even after discussing the fact that these models regurgitate what they've been fed, including Wikipedia and what strangers have written on the internet, and that giving the illusion of being a person is different from being a person, one could feel an eagerness to believe these systems were self-aware and whatnot.
There might be a metaphysical dimension to this "quest" of seeking consciousness in the artificial. One seeks fellow consciousness in other fellow beings (some humans) and higher consciousness in a higher being, while "technolatres" (mostly atheists) seek fake consciousness away from the higher being, more towards the material or, in a certain sense, non-being. Maybe it's something to be careful about. These things (not persons) are fun to understand: their capabilities and limitations, how to use them as tools without becoming their tools (same as with phones and computers, etc.), but that's it. The best way to demystify it is to try it, get bored, and not succumb to the hype.
 
Throughout human history, we have made great strides in automating tasks. This has allowed us to focus on discovering new things and inventing new technologies. However, it's not just a matter of making our lives easier. There are negative effects as well. For example, we may become addicted to what we do with more leisure time or struggle with our sense of self-worth when we are no longer needed for manual labor. Some people might think that the tasks we do define who we are, while others believe that we are capable of much more. Even tasks that require a lot of intellectual effort, such as finance, law, and marketing, can now be done by machines. This means that people who are good at these tasks may become less valuable to society. On the other hand, people who are good at critical thinking and other non-automatable skills will become more important. Ultimately, this may be a way to distinguish between people who are like machines and those who are more creative and self-aware.
 
Well, let's say it is still good that we have the possibility, aka the time, to detach and discuss the object of disruption.
(Side comment: from that perspective we, now, are no different from the guys in the year 1800 who were discussing the horseless carriage, which, by the way, was also electrical.)

The LLMs are the equivalent of the PC 'revolution', and in a way this marks the transition from the age of the computer to the age of functional technology that can also be personalized and customized by the user in order to perform smaller or bigger miracles or magic acts, as A.C. Clarke was saying.

Personally I am very excited, because LLMs are the first qualitative jump since the 'IBM Era'. And while we talk about jumps, I find it also inconceivable to have the Elons of this world opposing it, unless, indeed, the cat was rather leaked out of the bag for free before the Neuralink / Notes tech of all possible shops reached completion. For the moment the rivalry between old Billy Gates and young Musk works in favor of the 'starving' many.

Regarding the personification of the 'AI', and all derivatives around it: that is a self-created human problem. Life will find a way, without implants or augmentation and without fear, and hopefully all human beings will regain the learning skill and will develop their love for learning, because they... must.


2c
 
It is indeed a technological advancement, and interesting in that way, as a new technology to use. As with any technology, however, one must be careful about how to use it. A minor technological advancement (which for me was a nothing burger) was the introduction of the "smart" phone (an early iteration of smart or intelligent things) to the market. Despite being benign, it has been disruptive enough that many in an entire generation do not know whether they are boys or girls, in addition to other mental issues leading to depression and suicide, as documented by Haidt. LLMs will probably have an even higher potential for disruption, and I see them more as a holy grail for personalized surveillance and mind manipulation in the hands of big tech and overlords than anything else. It behooves us to understand it, know its uses and the dangers of its misuses, and not become reliant, dependent and addicted (I know a guy who cannot get home from work without the GPS in his car). It is the user that must be smart, not the tool. That's what I was getting at, more or less.
 