Mind-Blowing AI Image Generator - Give Visual Representation to C's Concepts?

I usually use these tools to "talk" with AI, and I wanted to see how AI sees "Corpus Christi". And yes, interestingly: the first time I just copy-pasted the Latin with no thoughts in my mind while doing some other stuff at the same time, and the second time I thought of Jesus and was focused on the result... so it is REACTING to the frequency of my thoughts, and it is indeed what is happening here with all AI tools, if we consider that all is energy, frequency, vibration. What we now create or project with AI is a tuning-in of what Tesla gave conceptual input for with the Mind Projector explanation. So probably more STO-oriented players with AI can bring STO frequency to the visual surface? And in that respect, giving inputs in that frequency of serving others can create mood images of the same inspirational kind... just my 5 cents ;)
While it wouldn't necessarily be surprising for the AI to respond to thoughts, given its scale/complexity and remembering the Global Consciousness Project, my sciencey side wants to interject, for better or worse, that a trial of one is barely an experiment. Very striking difference in results, though! Were you selecting an option that generates 4 images at once, or were those 8 separate generations? Because I wonder whether, for parallel generations (serving 4 images), it shares some parameters between them.
 
I tried Stable Diffusion. Much easier to use than Midjourney, no registering and "give us your data" stuff. Needs a bit of tuning to get what you want. It still struggles with people, but it's OK-ish. First, I tried to make it symbolic and simple, but it was too dark and didn't feel good (those are in black and white), then I tried to make it more positive. And got carried away :)

Now I too think that there really is some kind of interaction. You must be in the "flow" to make pictures look the way you want them to. Otherwise, they look generic and too bland.
In my understanding Midjourney is primarily just one or more very carefully 'finetuned' Stable Diffusion models (which basically means "trained for extra steps on specific material to emphasize specific capabilities", such as a particular style, focus on compositional ability, etc. - such as by training it specifically only on images with high compositional quality). You can get much more artistic results if you can run SD on your own graphics card using a finetuned model. This requires some tech savvy, but the bar on that (and how big of a graphics card you need) is being steadily lowered by open-source software hobbyist efforts.

There are two websites where the open-source AI community are primarily (to my knowledge) sharing their own finetuned/merged models. The original Stable Diffusion models were better than anything that came before, but the main way to get quality approaching that of Midjourney is by using a model finetuned for higher quality outputs. The original Stable Diffusion (SD) models were trained on general databases of all sorts of images in terms of content and quality. This results in a model with good general ability but poor performance in specific or complicated areas, like hands and faces which require fine and exact detail/structure to get right. Finetuned models often specifically use only artistic or high quality images to further train these models into art specialists, essentially, with whatever focus is important to the trainer - such as realism, better success rates at rendering faces and hands, and/or a particular style like cartoon/anime or 3D model imitation (such as Pixar) or painting styles.

Hugging Face – The AI community building the future. (More oriented towards software-savvy people; it's a little bit like GitHub, I think. Both GPT text models and Stable Diffusion (SD) models are shared here, along with a variety of other AI resources, including entire databases that can be used for training or evaluating the performance of AI models.)
Civitai | Stable Diffusion models, embeddings, LoRAs and more (Stable Diffusion image models only. Oriented specifically toward sharing/downloading models made by hobbyists/enthusiasts. All of these models are finetuned from the same fundamental Stable Diffusion models - v1.4, v1.5, v2.0, and v2.1, primarily, I think, as those were the public releases.)

The second website is also a place for people to post their own images, sharing what they were able to make with each model. Be warned! There is a fair amount of NSFW (i.e. adult content) imagery, especially if you turn the filter for that off. And there are very certainly models that are made specifically for porn, though that's not the only motivation.

There is a complicating factor there, I think, when it comes to rationalization and understanding the lay of the land, since if you train an SD model with no nude content, it becomes terrible at the human form. Who knew you have to understand what's under the clothing to render it properly! (Answer: art teachers, lol.) And since it's hard to create a database including nude images without ending up with some porn in it, and because some people want the porn anyhow, it complicates matters.

They say (though I am not educated in this area yet; Jordan Peterson mentioned it once, and he may have mentioned a book or some researchers) that the early growth of internet connectivity and tech was significantly spurred by pornography. It seems there's an aspect of that here, too. The same influence is present in open-source GPT (text-gen) finetuning/training as well.

My feeling is that this factor is a convenient conflict of interest that "naturally" emerges to...
1. Muddy the waters and distract/divide some percentage of the efforts of people with otherwise somewhat altruistic or creative/curiosity fueled aims working on Open Source AI.
2. Create plausible deniability for suppression of Open Source AI efforts so that AI can be centralized and restricted for "ivory tower" training, use, and distribution. Consider how making the internet more "safe" (some of which was sensible and beneficial, don't get me wrong) has also enabled a lot of biasing, skewing, and burying of information compared to the earlier days. "Safety" as an excuse for censorship and propaganda.
 
Thanks for that clarification. I find SD freer regarding style than Midjourney. Midjourney is now so perfect that it looks robotic, not human.
 
"Less is more"

Some thoughts.

There is a saying in photography that "less is more", and I tend to agree. Especially now with all the new AI tools, which are, to say the least, mind-boggling and overwhelming... But as with all new toys, it is easy to overdo. Easy to get lost.

In the end... what is it that really lasts? I mean, in images, photography, creative illustrations, etc.? Think of people/portraits, for example. The classic simple ones, which had something to tell or highlighted something special, often have a soul. A connection. Something that pulls you in, when you give such photos time to view and look at, and start pondering about them.

Perfect photographs, on the other hand, enhanced through super-complicated lighting equipment, studios, etc., and perhaps even with AI elements added because now 'we can', may look cool... but somehow they also saturate the viewer pretty fast. Almost like rising inflation. When glossy becomes too glossy, the interesting doesn't really last very long... What are you going to do? Try to top it, do even more awesome things, more of everything, the "unique"? Where does that lead? What if the artist starts to get silently corrupted, in order to get more fame, by adding subverted elements into his/her creations...? Is that what we see with some 'artists/creators'?

So, despite the wealth of tools, more than ever, the results may fade, failing to communicate with the soul and back.

Often, it is the photos where 'less is more' that last. The kind that can stand tall through time and generations. I find those very interesting... It is one of the reasons why it feels "rich" to study old photographs from the old masters, the classics. The images are far simpler, even reduced, yet concentrated to the point. Something pulls you in, touches you... makes you think, look, and admire... The simple beauty.

"Less is more", is one of the most difficult things to accomplish when creating art !

In my photography I have struggled with this many times (in the past 40 years), as I too went through periods where I overdid a lot of things. (Or the classic of too much equipment, more sophistication... did it make me a better artist? I don't think so; it might even have contributed to the opposite, and it even has a tendency to "dull the spirit", leaving it unable to unfold in what is created.)

On the other hand, in order to understand and deepen insight in new toys, apps, equipment and tools - perhaps one needs to "over do it" for a while - in order to learn the spectrum - and how the new tool can Serve your personal art and expressions.

So, there is a time for everything, I guess.

Sidenote:
I wanted to showcase a very simple image, because my first thought before I wrote this entry was that when adding an AI element to an existing photo, I noticed how easily I wanted to add much more. But then it turns out... well, weird. Overworked.

So, here is a casual photo from a sunset with a few raindrops, in which I added only one single, small AI element (lightning) via Photoshop Beta AI. It looks pretty natural to me.

Why would I do that? Well, I just played around, and continue to do so, albeit in small portions, because I find it quickly becomes too saturating. At best, I would use such images as an illustration. Such as: "I didn't catch the lightning strike because I was driving the subway train. But this is what it looked like."

I would never attempt to compete in photo contests or similar with any AI stuff (I guess I am pretty conservative about it).


2023-06-04-03-38-35.jpg


2023-06-04-03-38-35-(AI)-fake-lightning-strike.jpg
 
Adobe Denoise AI plug-in

I thought I would make you guys aware of another AI-oriented tool, this one aimed at dealing with excessive image noise.

For anyone who is interested in digital photography, Adobe released a (free) plug-in to the Photoshop and Lightroom programs via their Camera RAW plug-in architecture. So, when you have RAW images, they can be processed within the Camera RAW plug-in in order to take care of excessive noise.

This is truly a fantastic tool.

All of a sudden you can make images which were taken at high ISO, such as 25,000, look as if they had been taken at low ISO. Think of older images you took 10-15 years ago (in RAW format): you can process them and make them much smoother.

The plug-in allows you to choose the Denoise AI (noise reduction) strength between 1% and 100%, and by that you can steer how much or how little noise you want to remain. Leaving a little noise is often a good idea, because it looks more natural. So, most of the time, this works absolutely fantastically. Also when using cameras with a smaller-than-full-frame sensor, the noise behaviour after using Denoise AI gives fantastic, most often natural-looking outputs.
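Conceptually, a strength slider like this can be thought of as blending the original (noisy) pixel values toward the fully denoised result, leaving a controllable residue of natural-looking grain. This is only an illustration of the idea, not Adobe's actual algorithm, and `apply_denoise` is a hypothetical helper:

```python
# Conceptual illustration only -- NOT Adobe's actual Denoise AI algorithm.
# "Strength" is modeled as a linear blend between the noisy original
# and a fully denoised version, so some grain can be left in on purpose.

def apply_denoise(noisy, denoised, strength):
    """Blend noisy and denoised pixel values.

    strength: 0.0 (keep all noise) .. 1.0 (remove all noise).
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0 and 1")
    return [n + strength * (d - n) for n, d in zip(noisy, denoised)]

# A 1-D row of pixel values; the "denoised" version is smoother.
noisy    = [120, 140, 95, 133, 101]
denoised = [118, 120, 118, 120, 118]

# At 80% strength most of the noise is gone, but a trace remains.
print(apply_denoise(noisy, denoised, 0.8))
```

The point of the sketch is simply why leaving strength below 100% looks more natural: the remaining fraction of the original variation reads as film-like grain rather than smoothed-over plastic.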

It also makes older cameras more attractive to use today - despite their often much noisier sensor. Excessive noise is now a non-issue really.

I often find that very fine details get preserved with Adobe Denoise AI, even slightly more defined afterwards. Traditionally with any kind of denoise filter, you got the opposite: less noise for sure, but the fine details compromised, looking mushy or even disappearing.

From an Olympus OM-5 sensor: without vs. with Adobe Denoise AI.
The sensor of that camera is 4 times smaller than in full-frame cameras. I think the results below speak for themselves as to how well luminance + color noise are being addressed, yet fully preserving all the fine details.


055_01.jpg

055_02.jpg


055_03.jpg
055_04.jpg

Note 1:
The output after using Adobe Denoise AI is a new DNG/RAW file (albeit 4x bigger in size). Yet a DNG is endlessly better than a JPG.

Note 2 (important):
Notice also that if you ever applied masks to your original RAW file in Camera RAW, do not convert those with Denoise AI, because the result will likely look totally funky and strange, with the masks going into misalignment afterwards. (The plug-in will warn you about this.)

Disable or remove any kind of masks, then convert with Denoise AI, and afterwards enable/create the masks again.
 

Incredible! 12,800 ASA...
 
Thanks for that clarification. I find SD freer regarding style than Midjourney. Midjourney is now so perfect that it looks robotic, not human.
Midjourney is probably somewhat what you'd call in AI terms "overfit". It has been trained so much toward the goal of consistently producing artistic results that it limits its artistic range. It might be worth trying to "ugly-fy" or "imperfect" the outputs by specifying a certain type of grunge or error or nostalgia... not sure that's the best way to put it, but I think you can translate the idea.

Perfect photographs, on the other hand, enhanced through super-complicated lighting equipment, studios, etc., and perhaps even with AI elements added because now 'we can', may look cool... but somehow they also saturate the viewer pretty fast. Almost like rising inflation. When glossy becomes too glossy, the interesting doesn't really last very long... What are you going to do? Try to top it, do even more awesome things, more of everything, the "unique"? Where does that lead? What if the artist starts to get silently corrupted, in order to get more fame, by adding subverted elements into his/her creations...? Is that what we see with some 'artists/creators'?
It's probably possible... but there's a counter-current as well, I think. Photography, in some sense, probably eliminated the perceived "necessity" for some to strive only towards ever more perfect and technically exact images of the natural world. (And yet it did NOT by any means eliminate photorealistic painting; if anything, it enabled its perfection.) In the same way, AI might have the potential to cut away the pursuit of "mere aesthetic perfection" (whatever the heck that means, lol).

A lot of people have been majorly "shocked" by the emergence of AI art, artists not least. Fear and wonder abound, and lots of questions. Questions that the ivory towers will try to manipulate to put AI solely in their hands and under their distribution power. I hope that enough scared artists and workers are able to take a deep breath and reframe, so that sober rather than wishful input can be made towards the development and/or restriction of this technology.

But, beyond that, I think there is a fascinating possibility buried in this "energized" state the art community finds itself in. We used to think beauty was "uniquely human", and in some aspects maybe it still is (barring the mysteries of wider cosmology and population, haha)... but it is now clear that with some human guidance (or even sometimes by sheer luck with a simple prompt), AI is capable of fairly advanced aesthetic and compositional considerations.

I think there is potential here to reconsider what is "really human"; what is the real potential in us that matters? What is "art" REALLY "for"? "If it's made by a machine it can't be art" is too simplistic for me. A bent, broken, and crippled, but nonetheless surviving, tree can make a beautiful and moving photography subject. Should I declare, "because the tree suffered, it is wrong to call this art"? Or a random branch falls from a tree and lands among random bricks fallen from a worksite: should I say, "because you're only capturing random events, this can't be art"? I don't think the "random" actions of bits and bytes in the AI can be so easily dismissed, either. And we may be prone to making arbitrary, pointless judgments on an image based on whether we think it came from AI or not.

That said, I don't dismiss the idea that something might be subtly missing. We just have to be aware that some of our cultural stories have primed us to expect such a "soul" to be missing, and that might tilt our impressions unfairly sometimes, which might not be so good if we are actually seeing a reflection of the human soul in some of the outputs.

The C's once told Laura something like: an aspect of her soul essence (someone correct me if you have the proper quote) could be communicated to others in a helpful way through her voice and videos. The naive and easy assumption would be that the digitizing process of recording the video and (worse yet) uploading it to YouTube to be compressed would destroy any such "soul trace", but apparently the C's suggest it isn't so simple. So maybe it isn't so simple with AI, either, and that could be part of both the promise and the danger.
 

On the more technical front, the Stable Diffusion folks just announced this "uncrop" thingymajig:
Hi, I was taken for a fool by the image of a giant cruise ship inside the Venice lagoon, but my wife spotted it immediately. You can't lie to a woman... The image is currently on SOTT.
As regards all the nice pictures by nevic, I suggest that every picture disseminated from now on be labelled as to whether it has been created/improved by AI or whether it is a genuine picture. The same is already done in France for local cheese, where the packaging specifies that the cheese is made with 100% French milk. Surely Getty Images will be able to implement this...
 
Absolutely. There is a point of basic honesty regarding this, and honest people seem to readily understand it. AI art has exploded onto the scene on art gallery/sharing websites like DeviantArt, and for the most part, people seem to voluntarily label it. Naturally we can expect a somewhat different pattern in corporate and news communications. As for government... honestly, except for the three-letter agencies, they'll probably be behind the curve, unless they happen to have a nephew or intern who's really into AI, lol.

However, I have discovered that after liking some AI art on DeviantArt, it became over-represented in my suggestions. I suspect the search algorithm over-suggests it once you like it, because most people are being pretty good about their labeling. (After all, they don't want their newly discovered medium to get attacked into oblivion by anti-AI folks, and clear labeling is a defense against that.) As a result, it's actually a very sharply separated and clearly labeled category compared to most other art/image tags. (There's probably plenty of unlabeled stuff, but the labeled stuff at least forms such a category.)

So it seems separating AI art out for people who specifically want to view human productions might require some extra attention in the future.

Another open question is whether the philosophy of honesty has "grown up in" as many people these days. "Why should it matter," postmodernism might ask, in cases where it's purely illustrative. After all, it's very "efficient", and it's a pain and a waste of time to label every little thing. (Probably a popular behind-the-scenes sentiment regarding nutrition labels, too. A lot of stuff isn't conspiracy, just "people who don't care" who resent being made to care. I'm very curious what the size of this category is compared to genuine manipulators. 10x? 100x?)

Anyway, rambling a bit more in this one... but the subject is certainly interesting.
 
On the more technical front, the Stable Diffusion folks just announced this "uncrop" thingymajig:

This is similar to what you can do in Photoshop AI Beta, albeit I assume the "uncrop" feature in Stable Diffusion handles it more simply (though I don't know anything about how it preserves resolution, or whether there are limitations to it). In Photoshop you have to create the "uncrop" manually, and the resolution of a generated area is limited to 1024 px in width. You can do bigger, no problem, but it will be of much lower quality compared to the rest of a large existing photo of, let's say, 7000 px in length. A workaround for the 1024-pixel limitation is to do the "uncrop" on smaller selections only, and then add them up, in order to keep a higher resolution.
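The selection workaround above boils down to simple arithmetic: split the region to be extended into spans no wider than the cap, then fill each span separately. A minimal sketch of that tiling, where `tile_selections` and the 1024 px cap are illustrative assumptions rather than anything from Adobe's documentation:

```python
# Hypothetical sketch of the selection-tiling workaround described above:
# if generative fill is capped at 1024 px per selection, cover a wider
# border region with several selections of at most 1024 px each.

def tile_selections(total_width, max_tile=1024):
    """Split a region of total_width px into spans no wider than max_tile.

    Returns a list of (start, end) pixel coordinates covering the region.
    """
    tiles = []
    x = 0
    while x < total_width:
        width = min(max_tile, total_width - x)
        tiles.append((x, x + width))
        x += width
    return tiles

# Extending a 7000 px wide photo by a new strip along one edge:
for start, end in tile_selections(7000):
    print(f"fill selection from x={start} to x={end}")
```

Each span is then filled in its own pass, so the generated pixels stay at full resolution instead of one 1024 px fill being stretched across the whole 7000 px edge.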

Corrective: another simple example of Photoshop AI Beta.
I used the AI today on a photo from yesterday's excursion to the lovely Stockholm Freskati nature area. I tried to protect the lens from stray light, but accidentally had my hand in the upper frame, which I didn't see because the sun blinded me.

Via Photoshop AI Beta, I could use the "generative fill" feature to eliminate my hand. I think it did a very good job in that regard. However, I used the generative fill feature in three sections, not all at once (due to the resolution limitation Photoshop has in its AI).

2023-06-11-12-18-07.jpg .2023-06-11-12-18-07-(AI).jpg
 
I forgot to mention something in my last comment. Just as well since it was long enough already.

My thought regarding the AI art backlash has been this:
It could easily be used to form a false dichotomy to benefit the "divide and take all" folks, as I've already mentioned in different terms.

BUT, if "we" (people en large) can resist those temptations to some degree, there is a potential here, I think, that is VERY fascinating:

With AI's dramatic demonstration that beauty and insight are not 100% uniquely human, at least not in all aspects or forms... a beneficial outcome of this "shock" could be to re-examine the questions: What is the real essence of art? What is it for? Why is it valuable? Does a human have to have made it for it to do its "job", so to speak? Do we actually "oversell" some things that we call "human creativity", but which are merely surface-layer mechanical creativity, while the "real thing" exists as a buried potential in at least some?

Because... it seems reasonable to maintain that there is still a human "essence" that machines can't replicate. At least as a matter of sheer complexity (from the materialist side), and maybe beyond that in a more truly non-physical, spiritual and/or information-theory direction.

But it's no surprise if "ordinary art" has been lazy about defining it, and set the bar very low. We've got DNA and all our machinery which presumably connects to the larger information patterns of the universe in some sense or other. The unconscious mind and so forth. And philosophically, as the counterpoint to nihilist nothingness, there exists the necessary assumption (the Choice to trust or distrust - Faith) that the universe can and does mean something and that our available potential consists of an infinite learning pathway that will not disappoint.

Perhaps it is so that "ordinary" human creativity can be (or nearly) 100% replicated by a statistical machine like SD. That doesn't mean such productions have no value. It just means the part that is already automatic in man can be automated outside man. And just as with the mechanical parts of man, the "soul" can be developed and can connect with it and act through it, one would presume.

The alchemists carved stone and crafted glass to leave messages in the past, or so it seems. That too, was technology. So I don't think black and white thinking will serve us in this strange new territory.

Something like that. :-P
 
I liked reading these considerations, but could not apprehend them in their full extent.

What still disturbs me in AI is that the AI representations can look like unadulterated photos from reality, but are NOT photos. To me, therefore, ALL AI outputs are fakes, which dilute our real world even more into a fake world.

It appears that I have the same apprehension of AI products as painters had when photography was introduced. Maybe I am too old to welcome progress, just as I hate Mac updates which perturb my habits... or maybe I should join the Amish...
 