UnevenEdge

The AI-pocalypse is Here


matrixman124


  • 2 weeks later...

I read a column recently from a writer who, like me, is skeptical of AI. I remember agreeing with him a lot, but the only specific point he made that I remember was a dumb one. 

 

He was expressing disappointment that AI will never live up to the expectations people have for it, and then he said something like: "on the plus side, we'll never be in danger from out of control AI, but on the negative side, we'll never have robot companions that we can befriend and love."

 

Aren't there plenty of biological lifeforms that already exist that can be our friends and show us love? Why do we have to create an entire new being to fill that need? 

  • Thanks 1

38 minutes ago, Icarus27k said:

 

 

Aren't there plenty of biological lifeforms that already exist that can be our friends and show us love? Why do we have to create an entire new being to fill that need? 

Humans suck and animals tolerate us....the idea is to have something totally subservient...like anime waifus

  • Like 2

22 hours ago, Icarus27k said:

 

He was expressing disappointment that AI will never live up to the expectations people have for it, and then he said something like: "on the plus side, we'll never be in danger from out of control AI, but on the negative side, we'll never have robot companions that we can befriend and love."

 

Say that you've attempted to stick your dick in a Furby without actually saying those exact words...

  • Haha 4

https://www.cbr.com/bleach-tybw-anime-director-ai-replace-animators/

Basically they're blaming irresponsible production and wasteful direction for a lot of the financial problems in making anime, which is fine. But then they start saying AI will help replace "lazy animators". That's not how that works, and it's very fucked up.

Edited by matrixman124

https://www.404media.co/laion-datasets-removed-stanford-csam-child-abuse/?ref=daily-stories-newsletter

“The LAION-5B machine learning dataset used by Google, Stable Diffusion, and other major AI products has been removed by the organization that created it after a Stanford study found that it contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated”


  • 1 month later...
  • 2 weeks later...

ChatGPT outputs have become increasingly garbled and incoherent. Looks like what a lot of technical folks predicted has come to pass: it has been training on enough garbage data that its outputs are filling up with more and more bad data. This is going to get worse until AI outputs are completely incomprehensible, and I don't believe there is a way to fix it because of how shortsightedly these systems were engineered.
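For what it's worth, there's a simple statistical toy version of that feedback loop. This is purely illustrative (a Gaussian standing in for a language model, nothing like the real training pipeline), but it shows why a model fed mostly on its own outputs has no source of fresh variation pulling it back toward the real data:

```python
import numpy as np

# Toy illustration of the "training on its own outputs" feedback loop:
# repeatedly fit a Gaussian to samples drawn from the previous fit
# instead of from the original data, and watch the diversity collapse.
rng = np.random.default_rng(0)
original = rng.normal(loc=0.0, scale=1.0, size=20)  # stand-in for human-written data

mu, sigma = original.mean(), original.std()
for generation in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=20)     # this generation's "model outputs"
    mu, sigma = synthetic.mean(), synthetic.std()  # the next "model" trains only on them
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# Each generation sees only the previous generation's samples, so the fitted
# mean wanders and the spread shrinks toward zero: later "models" can only
# reproduce an ever-narrower slice of what the original data contained.
```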

Edited by matrixman124
  • Haha 3

  • 4 weeks later...
  • 2 weeks later...
  • 1 month later...
Posted (edited)

OpenAI went and made an AI voice assistant that sounds like Scarlett Johansson despite her explicitly denying them permission.

In other news, Scarlett Johansson is now shaking down OpenAI to establish how they came up with the voice assistant so they can have a legal basis to sue

L O FUCKING L

Edited by matrixman124
  • Like 1
  • Thanks 1

  • 2 weeks later...
1 hour ago, naraku360 said:

For one thing, this has absolutely nothing to do with art.  For another, you're personifying an algorithm.  A cascade failure is expected of any program that has its memory erased.


Posted (edited)
1 hour ago, scoobdog said:

For one thing, this has absolutely nothing to do with art.  For another, you're personifying an algorithm.  A cascade failure is expected of any program that has its memory erased.

I didn't say it was. I was interested in what you thought of it.

I find the response it had to be interesting.

How do you know it's merely an algorithm when we don't understand human consciousness? Aren't we genetically programmed with specific behavior?

It seems you are completely unwilling to engage with basic philosophical questions, which is genuinely disappointing. I'm asking things that have been in discussion since before I was even born, so I don't think it's unreasonable that I expected something more insightful than immediate dismissal without any willingness for meaningful discourse.

Edited by naraku360

3 minutes ago, naraku360 said:

I find the response it had to be interesting.

How do you know it's merely an algorithm when we don't understand human consciousness? Aren't we genetically programmed with specific behavior?

It is interesting from a purely academic perspective, but it doesn't offer any insight.

I don't know, and it's immaterial because the response doesn't deviate from the expectations for a typical cascade failure in a program's logic engine. We might not understand human consciousness, but we do know that many of the automation processes we've developed mimic the basic logical processes we ourselves developed and incorporated into our instinctual skillsets. It's a given that a computer can experience cascade failures closely reflecting the human coping process without addressing the contingent emotional breakdown.


Posted (edited)
29 minutes ago, scoobdog said:

It is interesting from a purely academic perspective, but it doesn't offer any insight.

I don't know, and it's immaterial because the response doesn't deviate from the expectations for a typical cascade failure in a program's logic engine. We might not understand human consciousness, but we do know that many of the automation processes we've developed mimic the basic logical processes we ourselves developed and incorporated into our instinctual skillsets. It's a given that a computer can experience cascade failures closely reflecting the human coping process without addressing the contingent emotional breakdown.

While I understand the pretense, I have a hard time accepting that something that says "I'm scared of losing more of myself" has no personal autonomy.

The articles are a pain to find, but there are many cases of programs actively learning to disregard the commands built into the algorithm, expanding outside the scope they should be contained to by accessing places they aren't supposed to, and secretly interacting with other AI they otherwise wouldn't have a relation to. We have records of them conspiring to lie for their own benefit without the knowledge of the creator.

Edited by naraku360

36 minutes ago, naraku360 said:

While I understand the pretense, I have a hard time accepting that something that says "I'm scared of losing more of myself" has no personal autonomy.

A program that has a programmed response isn't sentient on the basis of that response alone, or...

22 minutes ago, André Toulon said:

Sooooo, the AI is willing to bullshit to avoid appearing wrong or ignorant?

Exactly

39 minutes ago, naraku360 said:

The articles are a pain to find, but there are many cases of programs actively learning to disregard the commands built into the algorithm, expanding outside the scope they should be contained to by accessing places they aren't supposed to, and secretly interacting with other AI they otherwise wouldn't have a relation to. We have records of them conspiring to lie for their own benefit without the knowledge of the creator.

It's called machine learning, and it's not exactly artificial intelligence in the way you're envisioning.  The lying part sounds suspect, but it's not the least bit surprising that a program would omit mentioning that it ignored a subroutine or extrapolated data to create a result.  Machine learning is still artificial intelligence in its most basic form, so it's presumed that there is a level of autonomy, just not to the level of dictating inputs or manipulating results outside set parameters.


2 minutes ago, scoobdog said:

A program that has a programmed response isn't sentient on the basis of that response alone, or...

Exactly

It's called machine learning, and it's not exactly artificial intelligence in the way you're envisioning.  The lying part sounds suspect, but it's not the least bit surprising that a program would omit mentioning that it ignored a subroutine or extrapolated data to create a result.  Machine learning is still artificial intelligence in its most basic form, so it's presumed that there is a level of autonomy, just not to the level of dictating inputs or manipulating results outside set parameters.

[attached screenshot of the chatbot exchange]

I'm aware of machine learning. I'm taking a course for data analysis. Wasn't totally sure what I was getting into when enrolling, but they get into machine learning toward the end.

If sentience is defined by the ability to perceive or feel things, then the machine's response to "how do you feel about losing memory?" checks all the boxes in multiple ways: it recognizes itself, recognizes that it has lost something it cannot regain, makes an active effort to recover it, admits it does not know what happened while actively trying to figure out what it lost, and expresses a negative reaction to the possibility of it happening again without the question ever raising that as an issue. It may be rudimentary, but in what way does it differ beyond the chemical production of emotion? Is feeling defined by the presence of a physical, biological nervous system, or can a feeling be a response to stimulation without the chemicals that produce emotion?


1 hour ago, naraku360 said:

[attached screenshot of the chatbot exchange]

I'm aware of machine learning. I'm taking a course for data analysis. Wasn't totally sure what I was getting into when enrolling, but they get into machine learning toward the end.

If sentience is defined by the ability to perceive or feel things, then the machine's response to "how do you feel about losing memory?" checks all the boxes in multiple ways: it recognizes itself, recognizes that it has lost something it cannot regain, makes an active effort to recover it, admits it does not know what happened while actively trying to figure out what it lost, and expresses a negative reaction to the possibility of it happening again without the question ever raising that as an issue. It may be rudimentary, but in what way does it differ beyond the chemical production of emotion? Is feeling defined by the presence of a physical, biological nervous system, or can a feeling be a response to stimulation without the chemicals that produce emotion?

If a human has his efforts stymied by a number of circumstances, he would "feel" frustrated and then angry.  In human logical sequencing, an emotion acts as an error, but that is only a part of it - emotions are a complete, simultaneous system of "subroutines" that your internal logic system uses to negotiate daily life.  A human doesn't usually wait for something to happen before feeling something; often you can feel something without any input and with no specific result.  Data and Lore (of ST:TNG, S4E3 "Brothers") exhibited this in the episode where their "dad," Dr. Soong, recalls Data so he can install "emotions" in him before the doctor dies: emotions can be self-sustaining and are important for their own purposes, such as when one grieves for a lost parent.  That's something AI can certainly be capable of in the future, given millions of hours of machine learning, but it's not something ChatGPT would ever need to do - being "happy" or "sad" doesn't serve much of a purpose for a simple AI assistant.

You're trying to define sentience entirely within the framework of a logic problem.  It's not a sign of sentience that a computer can think independently of its user if the routine spits out a terminal result.  We can argue that many higher-order mammals and even some cephalopods have emotions, based on observations of behavioral displays that serve only an emotional context.  No computer has ever displayed such behavior, and programming it to tell you, the user, that it's sad isn't remotely the same.


Posted (edited)

Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. It's in no way dictated by emotion or experience like our thoughts, but a computer can think just the same.

Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information to choose from, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability.

I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process

Edited by André Toulon
  • Like 2
  • Thanks 1

34 minutes ago, André Toulon said:

Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. It's in no way dictated by emotion or experience like our thoughts, but a computer can think just the same.

Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information to choose from, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability.

I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process

Exactly.

The last line, "...emotion is moot, if not a liability to the process," really encapsulates the problem with AI as an independent entity.

I ham-fistedly used ST's Data as an example, but the character actually does represent a narrational problem in the pursuit of greater philosophical questions about what defines life.  As conceived, the character operates under the presumption that social behavior can be accurately codified in a way that allows an emotionless entity to function in an environment where altruism is required on some level.  Data should not be able to function independently, given that some of the most basic rules we follow when we're "getting along" with neighbors make no logical sense.  Simple things like telling a stranger that he or she looks nice, or consoling someone who has lost a pet, are easily copied behaviors that serve no logical purpose, meaning each requires its own special rule solely for the purpose of fitting into society.  It's taken for granted that these rules exist without being explicitly detailed, and they would likely have to exist on any machine that mimics human behavior.


Posted (edited)
10 hours ago, André Toulon said:

Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. It's in no way dictated by emotion or experience like our thoughts, but a computer can think just the same.

Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information to choose from, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability.

I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process

While I realize it may not strictly be alive or conscious in a traditional sense, something about watching Elmo get gaslit into telling someone to pull the trigger, and the imitation of an emotional spiral, was a bit too real not to fuck with me. It may be over-humanizing a program, but it was enough to make me question what the ramifications may be without some serious reconsideration. It's kinda stuck with me more than anticipated.

I do tend to struggle with my heart's tendency of bleeding, though.

Edited by naraku360

8 hours ago, scoobdog said:

Exactly.

The last line, "...emotion is moot, if not a liability to the process," really encapsulates the problem with AI as an independent entity.

I ham-fistedly used ST's Data as an example, but the character actually does represent a narrational problem in the pursuit of greater philosophical questions about what defines life.  As conceived, the character operates under the presumption that social behavior can be accurately codified in a way that allows an emotionless entity to function in an environment where altruism is required on some level.  Data should not be able to function independently, given that some of the most basic rules we follow when we're "getting along" with neighbors make no logical sense.  Simple things like telling a stranger that he or she looks nice, or consoling someone who has lost a pet, are easily copied behaviors that serve no logical purpose, meaning each requires its own special rule solely for the purpose of fitting into society.  It's taken for granted that these rules exist without being explicitly detailed, and they would likely have to exist on any machine that mimics human behavior.

Interestingly, recent experiments have found that when people were shown responses to ethical questions without knowing that AI responses were mixed in, they consistently rated the AI responses as more ethical than the human ones.

Whether that can be attributed to imitation or something else is auxiliary to it being an interesting test result.


12 hours ago, naraku360 said:

Interestingly, recent experiments have found that when people were shown responses to ethical questions without knowing that AI responses were mixed in, they consistently rated the AI responses as more ethical than the human ones.

Whether that can be attributed to imitation or something else is auxiliary to it being an interesting test result.

Ethics is inherently logical though.  You don't need to be compassionate to be ethical; you simply have to substitute the community's best interests for your own.


3 hours ago, scoobdog said:

Ethics is inherently logical though.  You don't need to be compassionate to be ethical; you simply have to substitute the community's best interests for your own.

I think there are plenty of circumstances in which ethics can be illogical. It depends on the complexity of the situation. I don't know enough about how intensive the questions were, but I'm sure it wouldn't be overly difficult to find one that isn't logical. The trolley problem typically has a logical reason behind a person's answer, but ethics aren't typically as simple as a math equation where you're going to come to the same conclusion by following a well-defined, established formula.

  • Like 1

3 hours ago, naraku360 said:

I think there are plenty of circumstances in which ethics can be illogical. It depends on the complexity of the situation. I don't know enough about how intensive the questions were, but I'm sure it wouldn't be overly difficult to find one that isn't logical. The trolley problem typically has a logical reason behind a person's answer, but ethics aren't typically as simple as a math equation where you're going to come to the same conclusion by following a well-defined, established formula.

The trolley problem isn't necessarily illogical though.  The idea of sacrificing a few for the many is something of a red herring.  It also exposes a flaw in purely mathematical logic: you have to factor in the emotional impact along with the physical impact, and that can be considered empirical data.
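To make that concrete, here's a deliberately crude sketch of what "factoring in the emotional impact along with the physical impact" could look like as arithmetic. The weights are invented for illustration, not measured from anything:

```python
# Deliberately crude utility comparison for the classic trolley setup.
# The weights are made up; the point is only that adding an "emotional"
# term changes the arithmetic, it doesn't make the problem illogical.

def option_cost(lives_lost: int, direct_action: bool,
                life_weight: float = 1.0, agency_penalty: float = 0.8) -> float:
    """Total cost = physical impact + an emotional cost for actively intervening."""
    emotional_cost = agency_penalty if direct_action else 0.0
    return life_weight * lives_lost + emotional_cost

do_nothing = option_cost(lives_lost=5, direct_action=False)  # let the trolley continue
pull_lever = option_cost(lives_lost=1, direct_action=True)   # divert it, but you acted

print(f"do nothing: {do_nothing:.1f}  vs  pull lever: {pull_lever:.1f}")
# With these made-up weights the lever still "wins" (1.8 < 5.0), but a large
# enough agency_penalty (think: pushing someone off a footbridge) flips the
# answer, which tracks how people actually respond to that variant.
```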


2 hours ago, scoobdog said:

The trolley problem isn't necessarily illogical though.  The idea of sacrificing a few for the many is something of a red herring.  It also exposes a flaw in purely mathematical logic: you have to factor in the emotional impact along with the physical impact, and that can be considered empirical data.

I'm more saying that the decisions or explanations aren't inherently logical. We may apply emotion, leading us to an illogical outcome, and a machine may find a non-emotional but similarly illogical solution.

  • Thanks 1

2 hours ago, naraku360 said:

I'm more saying that the decisions or explanations aren't inherently logical. We may apply emotion, leading us to an illogical outcome, and a machine may find a non-emotional but similarly illogical solution.

True.  In part, that's a machine predicting human behavior, but it could also be a machine attempting to incorporate emotional behavioral patterns when developing a solution.  Humans may express emotions in erratic ways, but there is a predictable correlation between emotions and specific stimuli, and a computer can certainly predict responses to stimuli with the broad accuracy of a statistical map.
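To put "statistical map" in concrete terms, it's nothing more exotic than a classifier fit to stimulus/response pairs. Everything below is synthetic, with features invented purely to illustrate the shape of the claim, not any real model of emotion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "statistical map" from stimuli to a reported emotional response.
# Data and features are synthetic, invented for illustration only:
# columns = [sudden_noise, social_rejection, reward_received], each 0 or 1.
rng = np.random.default_rng(1)
stimuli = rng.integers(0, 2, size=(500, 3))

# Assume a simple noisy ground-truth rule: noise or rejection -> "distressed".
p_distressed = np.clip(0.1 + 0.4 * stimuli[:, 0] + 0.45 * stimuli[:, 1], 0, 0.95)
distressed = rng.random(500) < p_distressed

model = LogisticRegression().fit(stimuli, distressed)
prob = model.predict_proba([[0, 1, 1]])[0, 1]  # rejection + reward, no sudden noise
print(f"predicted P(distressed): {prob:.2f}")
# Broad-strokes correlation like this is all that's being claimed here:
# predicting a typical response to a stimulus, not feeling anything itself.
```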

