Artificial Intelligence News & Discussion

I am currently using ChatGPT for some translation, and it is still very good if you know how to use it properly. You must give it text in small chunks, and you should take a little break between chunks. If you do the opposite, you can make it do crazy stuff, like I demonstrated in my previous post. So I don't think its code is broken; I think it is intentionally programmed that way for the free 3.5 version.
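For anyone who wants to automate this, here is a rough sketch of the chunk-and-pause workflow, assuming the official openai Python client (API access, not the web UI). The model name, chunk size and pause length are illustrative, not tested recommendations.

```python
# A rough sketch of the chunk-and-pause workflow described above, using the
# official `openai` Python client (API access, not the web UI). The model
# name, chunk size and pause length are illustrative, not recommendations.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_in_chunks(text: str, target_lang: str = "English",
                        max_chars: int = 1500, pause_s: float = 5.0) -> str:
    # Pack paragraphs into chunks no longer than max_chars.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)

    translated = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_lang}."},
                {"role": "user", "content": chunk},
            ],
        )
        translated.append(resp.choices[0].message.content)
        time.sleep(pause_s)  # the "little break" between chunks
    return "\n".join(translated)
```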
Must depend on the application then. Check this out: https://arxiv.org/pdf/2307.09009.pdf
 
Perhaps ChatGPT first parses the prompt and applies standard text filtering; that's how they could keep a list of banned keywords and give generic answers about vaccines, etc. Assuming that this is just a very clever text-continuation generator, they could keep some "context" per user session that they glue onto the prompt to give the illusion that ChatGPT has memory of some sort. The context, of course, would have some grace period. The last thing is that maybe they run a differently quantized model for free and paid accounts, meaning that predicted text continuations might not be as "precise" on a free account and are more subject to "hallucinations". Loss of numerical precision means a huge win for them in memory usage and slightly faster computation in general. So I guess the degradation in quality was related more to cost cutting than to "evolution". My 2¢.
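To make the "glued context" guess concrete, here is a toy sketch of how such an illusion of memory could be built around a stateless text-continuation model. Everything here (the grace period, the keyword list, the generate callable) is hypothetical; it illustrates the speculation above, not how OpenAI actually does it.

```python
# Toy model of the speculation above: a keyword filter in front of a
# stateless text-continuation model, plus per-session "context" glued onto
# the prompt. Everything here (grace period, keyword list, the `generate`
# callable) is hypothetical; this is NOT how ChatGPT is actually built.
import time

GRACE_PERIOD_S = 30 * 60            # hypothetical session grace period
BANNED_KEYWORDS = {"vaccine"}       # hypothetical filter list

# user_id -> (last_seen_timestamp, conversation history)
sessions: dict[str, tuple[float, list[str]]] = {}

def answer(user_id: str, prompt: str, generate) -> str:
    # 1. Keyword filter: canned generic answer for flagged topics.
    if any(word in prompt.lower() for word in BANNED_KEYWORDS):
        return "I can only offer general information on that topic."

    # 2. Fetch per-user context; drop it once the grace period expires.
    last_seen, history = sessions.get(user_id, (0.0, []))
    if time.time() - last_seen > GRACE_PERIOD_S:
        history = []

    # 3. Glue the stored context onto the prompt, creating the illusion
    #    of memory on top of a memoryless generator.
    completion = generate("\n".join(history + [prompt]))

    # 4. Remember the exchange for the next prompt in this session.
    sessions[user_id] = (time.time(), history + [prompt, completion])
    return completion
```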
 
Must depend on the application then. Check this out: https://arxiv.org/pdf/2307.09009.pdf
It is an interesting study, but when it comes to translation or proofreading, I can demonstrate a huge difference in response quality not across a few months, but across a few minutes! I think it's possible that such a factor played a role in that study too, but they were probably not looking for it specifically. A lot of factors are probably involved in how ChatGPT performs. What if too many people are asking questions at the same time? Does that increase the time you wait for an answer, or does the quality of the answer degrade because less time is spent computing it? We will see how things develop as AI infrastructure catches up with demand.
 

A highly successful software entrepreneur, Johnson – who is actually 45 – sold a payment processing company to eBay in 2013 for $800 million before launching a biotechnology company called Kernel. With this enormous wealth behind him, Johnson launched Project Blueprint two years ago. It started with a battery of tests that measured all 78 organs in Johnson’s body, sampling his blood, saliva, stool and urine. He then went through MRI scans, ultrasounds, fitness tests and DNA methylation tests – “Hundreds of measurements across my body at frequent intervals,” Johnson says. The data was gathered and analysed at an organ-by-organ level and compared to what it would look like for a healthy person aged 10, 20, 30 and 40. Then, a team of more than 30 doctors and experts tried to figure out how to get Johnson’s body back to an earlier age, based on mass studies of the scientific literature.

“We would gather all the evidence, we would create clinical guidelines, and then we'd implement the protocols,” he says. “My daily lifestyle is a result of that process.”

Whatever the protocol tells him to do – and there are around 100 different actions or requirements codified for every single day – Johnson does. They’re augmented by a range of treatments – Johnson has completed five in the 72 hours before we speak – including acid peels and laser therapies. “From the moment I go to bed to the moment I wake up, my entire life is structured around the system,” he says. “What's interesting to me is, my team and I have built a system based upon science and measurement that better cares for me than I can care for myself.”
I found this guy in an article somewhere, and I don't know why he is trending just recently. I find his perseverance and discipline commendable. He has a very strict routine and consumes 111 tablets a day to maintain his health. His existential crisis, if I understood it correctly, is part of why he wanted to reverse aging. I find his ideas interesting, though his statement that we should "surrender our bodies to algorithms" strikes me as very off.
 
This Johnson guy, is he living? There is a difference between living and being alive. Also, this is the kind of situation where a person is so obsessed with living forever (that's the subtext, anyway) that they end up one day receiving a piano on the head. Irony is the most powerful force in the universe.
 
This Johnson guy, is he living? There is a difference between living and being alive. Also, this is the kind of situation where a person is so obsessed with living forever (that's the subtext, anyway) that they end up one day receiving a piano on the head. Irony is the most powerful force in the universe.
In his case, the difference is between living and not dying... If he won't allow himself to have a late night, how will he ever care for anyone else, let alone his father? Despite what he says in that short intro, he's doing it for himself... Maybe he is incapable of conceiving of anything past the passing of the body, hence his logical conclusion of existing forever.

There are so many different ways to look at life. You live until it's your time, and you are guaranteed to pass, so why not do it in a meaningful way? Which implies sacrifice, of life... You're going to spend your life doing what you do, so why not spend it living? I am sure some late nights in my life have made me older, but I would not trade some of those for anything, not for another week.
 
This Johnson guy, is he living? There is a difference between living and being alive. Also, this is the kind of situation where a person is so obsessed with living forever (that's the subtext, anyway) that they end up one day receiving a piano on the head. Irony is the most powerful force in the universe.
That's true. The interviewer in the video asked what he would change if he could change one thing about the world, and he answered: existence. He is obsessed with longevity and pursues it through algorithms, or AI. He mentioned that his mission is for the human race to survive (for as long as possible), though health is just one factor he has seriously worked on. What about planetary catastrophes, or cosmic phenomena that can also impact our survival?
if he won't allow himself to have a late night, how will he ever care for anyone else
He has mentioned that he is having a hard time finding the right partner because he is so strict about sleep that he cannot allow anyone in bed with him. I can only assume he is also thinking of not having kids.
He said he has thought about and done everything possible to cure his depression. Perhaps he wasn't satisfied with the results? Though I think he just said that to push his idea of AI into our minds (there is no other way, surrender to AI) and as a marketing strategy for his protocol, which is also why he is trending now. He is about to launch the starter kit soon, after all.

This is him before the protocol. He is a totally different person(?) now.

[image attachment: Johnson before the protocol]
 
He said he has thought about and done everything possible to cure his depression. Perhaps he wasn't satisfied with the results? Though I think he just said that to push his idea of AI into our minds (there is no other way, surrender to AI) and as a marketing strategy for his protocol, which is also why he is trending now. He is about to launch the starter kit soon, after all.
Well, one of the best cures or treatments for depression is indeed to think of someone other than yourself, which requires sacrificing the "divine" order you think you deserve to create around you.

I am not surprised that, being so self-involved, he can't crack the code for depression no matter what he tries. Having experienced depression in my life, I can say that it is a phenomenon in which you're completely focused on yourself and what you want, all the time.
 


Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster -sources
Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments ahead of the board's ouster of Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing. Reuters was unable to review a copy of the letter. The researchers who wrote the letter did not immediately respond to requests for comment.
According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events.
After the story was published, an OpenAI spokesperson said Murati told employees what media were about to report, but she did not comment on the accuracy of the reporting.
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though it was only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.
SUPERINTELLIGENCE
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Against this backdrop, Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to superintelligence, or AGI.
In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a gathering of world leaders in San Francisco that he believed AGI was in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.
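A side note on the article's claim that generative AI works "by statistically predicting the next word" and that "answers to the same question can vary widely": both follow from how the next token is chosen. Here is a toy sketch with made-up probabilities, nothing more:

```python
# Greedy decoding always picks the most likely word, so the answer never
# changes; sampling follows the distribution, so answers vary run to run.
import random

# Hypothetical next-word distribution after the prompt "2 + 2 ="
next_word_probs = {"4": 0.90, "5": 0.06, "four": 0.04}

def greedy(probs: dict[str, float]) -> str:
    return max(probs, key=probs.get)              # always "4"

def sample(probs: dict[str, float]) -> str:
    words, weights = zip(*probs.items())
    return random.choices(words, weights)[0]      # usually "4", sometimes not

print(greedy(next_word_probs))    # deterministic: "4"
print(sample(next_word_probs))    # stochastic: varies between runs
```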
 
Somewhat AI related: an interesting speech connecting the dots about transhumanism, by Laura Aboli.

[video attachment: Laura Aboli's speech]


I thought she made a fantastic speech, truly impressive and straightforward!

The original of her speech was a bit longer, 12 minutes, from the Better Way Conference on 3 June 2023.

 
Below is the letter to the board that is circulating on the internet. It looks like nothing more than a standard Silicon Valley toxic wo(r)k(e) culture around a narcissistic leader, served in the sauce of an incredibly important event. The result of the drama is the board being replaced, with the geeky and autistic CEO of Quora staying. Lord, have mercy, those are the people who oversee the development of Artificial General Intelligence? Hopefully, they won't be able to get there without some serious ramp-up of computing power and storage.
11/21/2023

To the Board of Directors of OpenAI:

We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman.

We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by in silence.

We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action. We urge you to:
  • Expand the scope of Emmett's investigation to include an examination of Sam Altman's actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity.
  • Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period.
  • Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm.
We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was on the order of 50%.

Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.

Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.

We provide concrete examples of Sam and Greg's dishonesty and manipulation, including:
  • Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019.
  • Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.
  • Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.
  • Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.
  • The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.
  • Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.
  • Sam's inconsistent promises of compute quotas to research projects, causing internal distrust and infighting.
Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.

The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability.

We urge the Board of Directors of OpenAI to take a firm stand against these unethical practices and launch an independent investigation into Sam and Greg's conduct. We believe that OpenAI's mission is too important to be compromised by the personal agendas of a few individuals.

We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI's original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency.

Sincerely,
Concerned Former OpenAI Employees

Contact


We encourage former OpenAI employees to contact us at formerly_openai@mail2tor.com.
We personally guarantee everyone's anonymity in any internal deliberations and public communications.

Further Updates


Updates will be posted at board.net
