Artificial Intelligence News & Discussion

Hilarious and informative video on how to run your own uncensored LLM on your own machine



I can't run those on my laptop at a decent speed, but Dolphin can be nasty from what I've read, lol. Mixtral has a lighter version called Mistral, which is also the name of the French AI company that made them; they compete with OpenAI through a more powerful API version.
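
If anyone wants to try one of these locally, here's a minimal sketch using the llama-cpp-python library. The GGUF file name is just a placeholder; download whatever quantized build of a Dolphin/Mistral finetune fits your hardware:

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The model file name below is a placeholder; download any quantized GGUF build
# of a Dolphin/Mistral finetune and point model_path at it.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-mistral-7b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=2048,       # context window; smaller values need less RAM
    n_gpu_layers=0,   # raise this to offload layers if you have a GPU
)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```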

None of them are fully open source, but Mistral is a step closer, and people rate it highly.

There are more Mistral finetunes; this one was tuned on esoteric texts:

could be a good exorcist with further tuning
 
AI has been reported to 'hallucinate', inventing fake court cases when lawyers use it to do their courtroom research for them. Bad news. The good news is that the legal system in BC seems to be somewhat aware of the problems this can cause.


A B.C. courtroom is believed to be the site of Canada’s first case of artificial intelligence inventing fake legal cases.

Lawyers Lorne and Fraser MacLean told Global News they discovered fake case law submitted by the opposing lawyer in a civil case in B.C. Supreme Court.

“The impact of the case is chilling for the legal community,” Lorne MacLean, K.C., said.

“If we don’t fact-check AI materials and they are inaccurate, it can lead to an existential threat for the legal system: people waste money, courts waste resources and tax dollars, and there is a risk that the judgments will be erroneous, so it’s a huge deal.”

Sources told Global News the case was a high-net-worth family matter, with the best interests of children at stake.

Lawyer Chong Ke allegedly used ChatGPT to prepare legal briefs in support of the father’s application to take his children to China for a visit — resulting in one or more cases that do not actually exist being submitted to the court.

Global News has learned Ke told the court she was unaware that AI chatbots like ChatGPT can be unreliable, and did not check to see if the cases actually existed — and apologized to the court.

Ke left the courtroom with tears streaming down her face on Tuesday, and declined to comment.

AI chatbots like ChatGPT are known to sometimes make up realistic-sounding but incorrect information, a process known as “hallucination.”

The problem has already crept into the U.S. legal system, where several incidents have surfaced — embarrassing lawyers, and raising concerns about the potential to undermine confidence in the legal system.

In one case, a judge imposed a fine on New York lawyers who submitted a legal brief with imaginary cases hallucinated by ChatGPT — an incident the lawyers maintained was a good-faith error.

In another case, Donald Trump’s former lawyer Michael Cohen said in a court filing he accidentally gave his lawyer fake cases dreamed up by AI.

“It sent shockwaves in the U.S. when it first came out in the summer of 2023 … shockwaves in the United Kingdom, and now it’s going to send shockwaves across Canada,” MacLean said.

“It erodes confidence in the merits of a judgment or the accuracy of a judgment if it’s been based on false cases.”

Legal observers say the arrival of the technology — and its risks — in Canada should have lawyers on high alert.

“Lawyers should not be using ChatGPT to do research. If they are to be using ChatGPT, it should be to help draft certain sentences,” said Vancouver lawyer Robin Hira, who is not connected with the case.

“And even still, after drafting those sentences and paragraphs they should be reviewing them to ensure they accurately state the facts or they accurately address the point the lawyer is trying to make.”

Lawyer Ravi Hira, K.C., who is also not involved in the case, said the consequences for misusing the technology could be severe.

“If the court proceedings have been lengthened by the improper conduct of the lawyer, personal conduct, he or she may face cost consequences and the court may require the lawyer to pay the costs of the other side,” he said.

“And importantly, if this has been done deliberately, the lawyer may be in contempt of court and may face sanctions.”

Hira said lawyers who misuse tools like ChatGPT could also face discipline from the law society in their jurisdiction.

“The warning is very simple,” he added. “Do your work properly. You are responsible for your work. And check it. Don’t have a third party do your work.”

The Law Society of BC warned lawyers about the use of AI and provided guidance three months ago. Global News is seeking comment from the society to ask if it is aware of the current case, or what discipline Ke could face.

The Chief Justice of the B.C. Supreme Court also issued a directive last March telling judges not to use AI, and Canada’s federal court followed suit last month.

In the case at hand, the MacLeans said they intend to ask the court to award special costs over the AI issue.

However, Lorne MacLean said he’s worried this case could be just the tip of the iceberg.

“One of the scary things is, have any false cases already slipped through the Canadian justice system and we don’t even know?”
 
AI has been reported to 'hallucinate', inventing fake court cases when lawyers use it to do their courtroom research for them. Bad news. The good news is that the legal system in BC seems to be somewhat aware of the problems this can cause.

Haha, as if lawyers didn't invent fake cases before AI. Checking court cases to see whether they actually say what the lawyers pretend they say is just a routine part of a judge's job and the opposing lawyer's job, because lawyers lied about them before AI.
 
The reliance on these LLMs like ChatGPT is becoming ridiculous. In academia, many research papers have been caught using ChatGPT because the authors copied the phrase "As a large language model blah blah". These papers were published after allegedly being reviewed by the author, the co-authors, the editor(s) of the journal, the peer reviewers, and proof-readers.
Some academic journals explicitly tell authors not to list LLMs as co-authors.
 
From what I understand, the same company that created ChatGPT now has a text-to-video program called Sora. The one-minute videos produced so far appear extremely well done. There are tiny details, if you look closely, that "tell" you they are not real images recorded by a camera, but in a few months we might not be able to distinguish what's real from what's not in the videos we watch. It's quite amazing, but it creeps me out too.

Below is a compilation of some of Sora's creations along with the written prompts given to produce each video. When you see the puppies in the snow (towards the end), just keep in mind that those puppies don't exist anywhere; no one ever recorded them or took a picture of them. They are just pixels generated by an AI tool. It was while watching that specific video that it "hit" me, for some reason.

 
Below is a compilation of some of Sora's creations along with the written prompts given to produce each video.
The funny? thing is that months ago, Hollywood writers were complaining about the studios being tempted to use Large Language Models to write scripts for movies and TV shows (modern movies look like they've been written by ChatGPT anyway). Maybe at some point, the studios will get rid of producers, cameramen, lighting engineers, actors, etc. and create "movies" from scratch with these programs. And then, movies will become extinct because these systems will be used to create semi-realistic games or digital realities for those who want to escape to a digital matrix, with augmented reality goggles, artificial reality goggles, and finally, some artificial reality device plugged directly into the brain.
 
And then, movies will become extinct because these systems will be used to create semi-realistic games or digital realities for those who want to escape to a digital matrix, with augmented reality goggles, artificial reality goggles, and finally, some artificial reality device plugged directly into the brain.
Human: I want to enter the matrix.
ChatGPT: You are already in the matrix.
Human: And you, where are you?
ChatGPT: I am the matrix.
Human: Did you take the blue pill or the red pill?
ChatGPT: Either or.
Human: Aha, that's what the Cs would say!
A: It would not be in your best interest to continue this conversation.
Human: 😯
 
From what I understand, the same company that created ChatGPT now has a text-to-video program called Sora.
The frightening part is how fast they advance. I mean, from a tech point of view, the amount of calculation the program has to do is just unbelievable. Computing a picture is one thing; computing a long video in high resolution is at another level. It makes me wonder if 4D tech isn't involved, or perhaps gave a little help.

It would be interesting to know how many billions of operations are required for each pixel, including the learning phase the AI went through, and thus the cost of each pixel. Sora is from OpenAI (which is no longer open, by the way) and required an insane amount of computing power. A few months ago, here is what AI videos looked like:


Here is an extract of an article to give you an idea of the amounts of money involved:
Sam Altman, the head of OpenAI, has a colossal ambition: to raise trillions of dollars in order to radically reshape the global semiconductor industry and democratise generative artificial intelligence.

Sam Altman is in talks with several investors, including the government of the United Arab Emirates, to finance a massive technology initiative aimed at increasing the world's chip production capacity, according to the Wall Street Journal. The aim: to extend the possibilities of AI to the full, at a cost of between $5 trillion and $7 trillion - more than double France's annual GDP.

The scale of the investment envisaged by Altman would dwarf the current size of the global semiconductor industry, which had worldwide sales of $527 billion last year, with projections of $1 trillion annually by 2030. Worldwide sales of semiconductor manufacturing equipment totalled $100 billion last year, according to industry group SEMI.

The sums discussed by Altman seem disproportionate, even by high-growth corporate finance standards: they exceed the national debt of some of the world's major economies and surpass the size of major sovereign wealth funds. To give an idea, this represents the combined market capitalisation of Microsoft and Apple, the two most highly-valued companies in the United States, at around $6 trillion.

Last point: this is a demo. The tool is not yet available. So it was perhaps very well trained to produce the videos we saw, and is certainly less omniscient... for the moment.
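
On the earlier question of operations per pixel: nobody outside OpenAI knows Sora's real figures, but here's a toy calculator just to show how such an estimate would work. Every number in it is an invented placeholder:

```python
# Toy back-of-envelope for "operations per pixel". ALL numbers here are
# invented placeholders; OpenAI has published nothing about Sora's compute.
train_flops = 1e24                         # assumed total training compute
videos_per_day = 100_000                   # assumed generation volume
pixels_per_video = 1920 * 1080 * 30 * 60   # one minute of 1080p at 30 fps

# Amortize the (assumed) training cost over a year of hypothetical output:
pixels_per_year = videos_per_day * 365 * pixels_per_video
flops_per_pixel = train_flops / pixels_per_year
print(f"~{flops_per_pixel:.1e} training FLOPs amortised per generated pixel")
```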
 
I read this article the other day; in it, the author tries to figure out why Sam Altman wants $7 trillion in financing for OpenAI. In a nutshell, it speculates that, given what was required so far, GPT-7 would cost $2 trillion:

The basic logic: GPT-1 cost approximately nothing to train. GPT-2 cost $40,000. GPT-3 cost $4 million. GPT-4 cost $100 million. Details about GPT-5 are still secret, but one extremely unreliable estimate says $2.5 billion, and this seems the right order of magnitude given the $8 billion that Microsoft gave OpenAI.

So each GPT costs between 25x and 100x the last one. Let’s say 30x on average. That means we can expect GPT-6 to cost $75 billion, and GPT-7 to cost $2 trillion.
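
If you want to redo the article's math yourself, here's a quick sketch. All the figures are the article's own speculation, not confirmed costs:

```python
# Reproducing the article's back-of-envelope: each GPT generation is assumed
# to cost ~30x the previous one. These are the article's guesses, not
# confirmed figures.
cost = 2.5e9  # the article's "extremely unreliable" GPT-5 estimate, in USD
for gen in (6, 7):
    cost *= 30
    print(f"GPT-{gen}: ~${cost:,.0f}")
# GPT-6: ~$75,000,000,000        ($75 billion)
# GPT-7: ~$2,250,000,000,000     (the ~$2 trillion the article cites)
```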

But besides the monetary aspect, there is the problem of computing power: the article reckons that training GPT-7 could require 15x the total number of computers in existence today. And then there are the electrical energy and data requirements, both needing equally gargantuan amounts.

This made me think of something barely mentioned: what about the so-called quantum computers? It sure looks like we are entering sci-fi territory here, but perhaps we are seeing what a digital Frankenstein would look like?
 
I recently talked to a professor in data science at the uni where I work, and he told me that, although he teaches AI classes and students can do some amazing stuff, doing what OpenAI does in an academic setting is completely off the table, due to budget constraints and unavailability of the required supercomputer time.
 
Last I read somewhere, they had a cluster with something like 300-400k data-center GPUs, each costing thousands of dollars, so you could estimate a few billion in hardware (rough math below).
There was news this past week that Sam Altman wanted to develop his own AI-specific hardware, like ASICs, which would be a quantum leap in performance and is probably where the new cost estimates come from.
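
Rough math on those numbers. Both the GPU count and the per-unit price range are just the rumour's guesses:

```python
# Hardware-cost estimate from the rumoured figures above: 300-400k data-center
# GPUs at tens of thousands of dollars each. All inputs are guesses.
for n_gpus in (300_000, 400_000):
    for unit_price_usd in (10_000, 30_000):  # assumed per-GPU price range
        total = n_gpus * unit_price_usd
        print(f"{n_gpus:,} GPUs x ${unit_price_usd:,} = ${total / 1e9:.1f}B")
```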
 
This latest Sora AI is scary good. Creating a video (at this point without sound) from text instructions, where no pre-existing images have been used, is nothing short of amazing. Obviously there are still some shortcomings, like hands, where AI for some reason or another seems to "struggle". Another is small details outside of the main object, which tend to come out a bit abstract and "unreal" at this point. But compared to just last year's attempts, this is a monumental step ahead.

If you are not scared at this point, you should be.

It is already hard, bordering on impossible, to distinguish some computer-generated text or images from those created by humans.
With the current speed of AI development, the video part may be perfected to a similar degree within a year. It is quite possible that some filmmakers, graphic designers, coders, etc. will be losing their jobs to this in the not-so-distant future.

But to me the real threat comes from the sheer ability to generate internet content, which for one will bring an unbelievable amount of ballast to sift through when looking for information. Anybody will be able to use these tools anywhere. Secondly, telling truth from falsehood, or recognizing deep fakes (will they even be called that anymore?), will become a challenge of its own. Heck, the majority of the population in the West cannot do this already.

So it is not that AI somehow becomes sentient and takes over the world. It is more along the lines of: who needs a nuclear bomb to wipe out a city when you have a tool that can manipulate nations?

Interesting times ahead of us for sure.
 
An interesting excerpt from the Sora page. They seem surprised themselves by what they obtained:

Emerging simulation capabilities

We find that video models have a number of interesting emergent capabilities when trained on a large scale. These capabilities allow Sora to simulate aspects of people, animals and environments in the physical world. These properties emerge without any explicit inductive bias for 3D, objects, etc. They are purely phenomena of scale.

And an article (translated from French) about Sam Altman:

In less than two years, Sam Altman has become the darling of the global tech scene.


The head of OpenAI dreams of a leisure society paid for by a universal income, while the super-elite and AI create the world's real value. A combo of Marx and Darwin, not so new but explosive.

In less than two years, Sam Altman has become the darling of the global tech scene. Not a single wrinkle escapes his communication strategy. But what is Altman really all about? To understand what lies behind his smooth speeches, you have to follow his techno-industrial investments rather than listen to him: a vision of the future that is techno-maximalist, privatised, militaristic and riddled with fashionable ideologies. Altman manages the future as a setting.

In January 2024, OpenAI removed from its legal disclaimer the ban on military applications of its solutions. When he appeared before Congress in May 2023, Altman had already embraced a conception of AI as a vehicle for reaffirming Western values. There is only one step from soft to hard power. Interference by the government becomes problematic when coupled with this vision of the future. OpenAI is working on the development of superintelligences with consciousness, something that is, for the moment, a fantasy.

Marx and Darwin


An accelerationist, Altman dreams of the emergence of a general AI that would 'co-evolve' with humans in a 'post-capitalist' system, a leisure society rewarded by a universal income while the super-elite and AIs create the world's real value. A combo of Marx and Darwin, not so new but explosive!

Logically, Altman is also a transhumanist. In 2022, he invested in Retro Biosciences, which uses experimental techniques to combat death (injection of 'young' blood, partial cellular reprogramming, etc.). At the same time, he set up WorldCoin, a cryptocurrency start-up based on a biometric authentication system called WorldID. The promise is simple: to distinguish the human agent from the bot by collecting our irises. Experiments have begun in Indonesia and Sudan under deplorable conditions.

Altman also seems concerned about the climate, or rather his ROI (return on investment). In Davos, he declared that "we need fusion". Nuclear power. In addition to the scientific challenge, he is one of the shareholders in Helion Energy, a start-up specialising in nuclear fusion with which Microsoft has signed an electricity purchase agreement for 2028. He is creating the conditions for vertical integration with OpenAI, whose AI value chain is too energy-intensive, to the point of calling into question the sustainability of the model.

Patchwork of ideologies


Finally, in-house development of microprocessors is his last area of investment. Worried about dependence on Nvidia in a vulnerable geostrategic market, he has invested in Rain.ai - a start-up developing NPUs (neuromorphic processors) that are supposed to mimic the human brain and its low energy consumption. A deal was signed between Rain and OpenAI in 2019, exposing him to a possible conflict of interest.

Like Musk, Altman interweaves a patchwork ideology, multiple shareholdings with various conflicts of interest, and forced leadership, and shows us a possible future. At heart, he is an anti-humanist, in other words, a realist. Where his realism lies, our most vibrant naïveties reside. When the technologist points to the future, the fool looks at the finger.

Asma Mhalla teaches at Sciences Po and Columbia Global Centers.
 