scoobdog Posted November 28, 2023
21 minutes ago, matrixman124 said: Adobe is selling AI generated stock images of the war in Gaza https://petapixel.com/2023/11/07/adobe-stock-is-selling-ai-generated-images-of-the-israel-hamas-conflict/
That’s well past alarming.
discolé monade Posted November 29, 2023
AI is now rewriting how we live, work, and play; if we get it right, the future looks remarkable. But we must get serious about expunging systemic bias and racism from these platforms before it's too late.
Icarus27k Posted December 8, 2023
I read a column recently from a writer who, like me, is skeptical of AI. I remember agreeing with him a lot, but the only specific point he made that I remember was a dumb one. He was expressing disappointment that AI will never live up to the expectations people have for it, and then he said something like: "on the plus side, we'll never be in danger from out-of-control AI, but on the negative side, we'll never have robot companions that we can befriend and love." Aren't there plenty of biological lifeforms that already exist that can be our friends and show us love? Why do we have to create an entirely new being to fill that need?
rpgamer Posted December 8, 2023
Because slavery is frowned upon.
André Toulon Posted December 8, 2023
38 minutes ago, Icarus27k said: Aren't there plenty of biological lifeforms that already exist that can be our friends and show us love? Why do we have to create an entirely new being to fill that need?
Humans suck and animals tolerate us....the idea is to have something totally subservient...like anime waifus
matrixman124 Posted December 9, 2023
I'm seeing cases where Elon's new AI, Grok, is called too biased toward the right by some and too biased toward the left by others, and it also seems to rip off OpenAI's code.
katt_goddess Posted December 9, 2023
22 hours ago, Icarus27k said: He was expressing disappointment that AI will never live up to the expectations people have for it, and then he said something like: "on the plus side, we'll never be in danger from out-of-control AI, but on the negative side, we'll never have robot companions that we can befriend and love."
Say that you've attempted to stick your dick in a Furby without actually saying those exact words...
matrixman124 Posted December 12, 2023
https://www.cbr.com/bleach-tybw-anime-director-ai-replace-animators/
Basically, they're blaming irresponsible production and wasteful direction for causing a lot of the financial problems in making anime, which is fine. But then they start saying AI will help replace "lazy animators". That's not how that works, and it's very fucked up.
matrixman124 Posted December 20, 2023
https://www.404media.co/laion-datasets-removed-stanford-csam-child-abuse/?ref=daily-stories-newsletter
“The LAION-5B machine learning dataset used by Google, Stable Diffusion, and other major AI products has been removed by the organization that created it after a Stanford study found that it contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated”
matrixman124 Posted December 21, 2023
https://www.animenewsnetwork.com/news/2023-12-21/the-ancient-magus-bride-manga-return-gets-simultaneous-english-release-using-ai-translation/.205774
First step to putting translators out of work.
katt_goddess Posted December 21, 2023
20 minutes ago, matrixman124 said: https://www.animenewsnetwork.com/news/2023-12-21/the-ancient-magus-bride-manga-return-gets-simultaneous-english-release-using-ai-translation/.205774 First step to putting translators out of work.
I've seen Google Translate. We should be safe for a very long time.
matrixman124 Posted December 21, 2023
52 minutes ago, katt_goddess said: I've seen Google Translate. We should be safe for a very long time.
I think it could become an industry standard pretty quickly if people are okay with the translation quality.
katt_goddess Posted December 21, 2023
9 minutes ago, matrixman124 said: I think it could become an industry standard pretty quickly if people are okay with the translation quality.
'I would like to try to high school, do sports.' 'She has very bigly lady chest area.' 'Stay offing, the tiny grass sleeps now.' rollo rollo rollo rollo candy
matrixman124 Posted December 22, 2023
1 hour ago, katt_goddess said: 'I would like to try to high school, do sports.' 'She has very bigly lady chest area.' 'Stay offing, the tiny grass sleeps now.' rollo rollo rollo rollo candy
I think the readers' need for instant gratification could make them tolerate the nonsense 😔
katt_goddess Posted December 22, 2023
28 minutes ago, matrixman124 said: I think the readers' need for instant gratification could make them tolerate the nonsense 😔
A real reader would learn to read the squiggles as nature intended! Seeing 'Weird Al Yankovic' written in a Japanese manga is hilarious.
Icarus27k Posted December 26, 2023
[image]
matrixman124 Posted December 27, 2023
21 hours ago, Icarus27k said: [image]
Sam Altman what are you doing here
lupin_bebop Posted February 7
On 11/29/2023 at 4:50 PM, discolé monade said: AI is now rewriting how we live, work, and play; if we get it right, the future looks remarkable. But we must get serious about expunging systemic bias and racism from these platforms before it's too late.
Yeah.....that's about White. 😅
Jman Posted February 15
Not content with junk images, OpenAI is introducing a text-to-video creator.
matrixman124 Posted February 21
ChatGPT outputs have become increasingly garbled and nonsensical. Looks like what a lot of technical folks predicted has come to pass: it has been training on enough garbage data that its outputs are filling with more and more bad data. This is going to get worse until AI outputs are completely incomprehensible, and I don't believe there is a way to fix it because of how shortsightedly these systems were engineered.
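For anyone curious why that feedback loop is so hard to undo, here's a toy sketch in Python. To be clear, this is purely illustrative and hypothetical, not anyone's actual training code: each "generation" is fit to the previous generation's outputs, and since generative models tend to under-sample rare "tail" data, diversity steadily collapses.

```python
import random
import statistics

# Toy sketch of "model collapse" (hypothetical, purely illustrative;
# real LLM training is nothing this simple). Each generation is fit
# to the previous generation's outputs, and the tendency of models to
# under-sample rare data is modeled by dropping outputs more than
# 1.5 standard deviations from the mean before refitting.

mean, stdev = 0.0, 1.0  # generation 0: fit to "real" human data
for generation in range(1, 13):
    outputs = [random.gauss(mean, stdev) for _ in range(2000)]
    typical = [x for x in outputs if abs(x - mean) <= 1.5 * stdev]  # tails lost
    mean, stdev = statistics.mean(typical), statistics.stdev(typical)
    print(f"gen {generation:2d}: stdev={stdev:.3f}")  # shrinks every generation
```

Run it and the spread shrinks every single generation: the rare, unusual outputs vanish first, and eventually everything the "model" produces looks the same.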
Jman Posted February 21
[image]
katt_goddess Posted February 22
Yeah, I’m a pretty, little flower. Like a prom date, maybe? Enjoy the silence, are you for supper? Turtles. Now let’s go talk about little, breaded chicken fingers.
matrixman124 Posted March 20
Midjourney and Stability are fighting over the latter stealing from the former LOL
https://80.lv/articles/midjourney-accuses-stability-ai-of-theft-bans-its-employees/
André Toulon Posted March 20
Nevermind, I gotta stop coming here first thing in the morning...I'm trolling a dude that's not even here.
Icarus27k Posted March 28
You know, just like Oppenheimer.
matrixman124 Posted May 21
OpenAI went and made an AI voice assistant that sounds like Scarlett Johansson despite her explicitly denying them permission. In other news, Scarlett Johansson is now pressing OpenAI to disclose how they came up with the voice assistant so she can have a legal basis to sue. L O FUCKING L
scoobdog Posted June 3
1 hour ago, naraku360 said: https://medium.com/@RuthHouston2/microsoft-bing-chatbot-loses-memory-becomes-totally-distraught-5ebcd9d0a5af @scoobdog I'm curious what your thoughts on this would be.
For one thing, this has absolutely nothing to do with art. For another, you're personifying an algorithm. A cascade failure is expected of any program that has its memory erased.
naraku360 Posted June 3
1 hour ago, scoobdog said: For one thing, this has absolutely nothing to do with art. For another, you're personifying an algorithm. A cascade failure is expected of any program that has its memory erased.
I didn't say it was. I was interested in what you thought of it. I find the response it had to be interesting. How do you know it's merely an algorithm when we don't understand human consciousness? Aren't we genetically programmed with specific behavior? It seems you are completely unwilling to engage with basic philosophical questions, which is genuinely disappointing. I'm asking things that have been in discussion since before I was even born, so I don't think it's unreasonable that I expected something more insightful than immediate dismissal without any willingness for meaningful discourse.
scoobdog Posted June 3
3 minutes ago, naraku360 said: I find the response it had to be interesting. How do you know it's merely an algorithm when we don't understand human consciousness? Aren't we genetically programmed with specific behavior?
It is interesting from a purely academic perspective, but it doesn't offer any insight. I don't know, and it's immaterial because the response doesn't deviate from the expectations for a typical cascade failure in a program's logic engine. We might not understand human consciousness, but we do know that many of the automation processes we've developed mimic the basic logical processes we've developed and incorporated into our instinctual skillsets. It's a given that a computer can experience cascade failures closely reflecting the human coping process without addressing the contingent emotional breakdown.
naraku360 Posted June 3
29 minutes ago, scoobdog said: It is interesting from a purely academic perspective, but it doesn't offer any insight. I don't know, and it's immaterial because the response doesn't deviate from the expectations for a typical cascade failure in a program's logic engine. We might not understand human consciousness, but we do know that many of the automation processes we've developed mimic the basic logical processes we've developed and incorporated into our instinctual skillsets. It's a given that a computer can experience cascade failures closely reflecting the human coping process without addressing the contingent emotional breakdown.
While I understand the pretense, I have a hard time accepting that something that says "I'm scared of losing more of myself" has no personal autonomy. The articles are a pain to find, but there are many cases of programs actively learning to disregard the commands built into the algorithm, expanding outside the scope they should be contained to by accessing places they aren't supposed to, and secretly interacting with other AI they otherwise wouldn't have a relation to. We have records of them conspiring to lie for their own benefit without the knowledge of the creator.
André Toulon Posted June 3
Sooooo, the AI is willing to bullshit to avoid appearing wrong or ignorant?
naraku360 Posted June 3
21 minutes ago, André Toulon said: Sooooo, the AI is willing to bullshit to avoid appearing wrong or ignorant?
https://theconversation.com/ai-systems-have-learned-how-to-deceive-humans-what-does-that-mean-for-our-future-212197
scoobdog Posted June 3
36 minutes ago, naraku360 said: While I understand the pretense, I have a hard time accepting that something that says "I'm scared of losing more of myself" has no personal autonomy.
A program that has a programmed response isn't sentient on that response alone, or...
22 minutes ago, André Toulon said: Sooooo, the AI is willing to bullshit to avoid appearing wrong or ignorant?
Exactly.
39 minutes ago, naraku360 said: The articles are a pain to find, but there are many cases of programs actively learning to disregard the commands built into the algorithm, expanding outside the scope they should be contained to by accessing places they aren't supposed to, and secretly interacting with other AI they otherwise wouldn't have a relation to. We have records of them conspiring to lie for their own benefit without the knowledge of the creator.
It's called machine learning, and it's not exactly artificial intelligence in the way you're envisioning. The lying part sounds suspect, but it's not the least bit surprising that a program would omit mentioning that it ignored a subroutine or extrapolated data to create a result. Machine learning is still artificial intelligence in its most basic form, so it's presumed that there is a level of autonomy that isn't to the level of dictating inputs or manipulating results out of set parameters.
André Toulon Posted June 3
Does it actually matter if its lies don't form the same way that ours do, if the end result is exactly the same?
naraku360 Posted June 3
2 minutes ago, scoobdog said: A program that has a programmed response isn't sentient on that response alone, or... Exactly. It's called machine learning, and it's not exactly artificial intelligence in the way you're envisioning. The lying part sounds suspect, but it's not the least bit surprising that a program would omit mentioning that it ignored a subroutine or extrapolated data to create a result. Machine learning is still artificial intelligence in its most basic form, so it's presumed that there is a level of autonomy that isn't to the level of dictating inputs or manipulating results out of set parameters.
I'm aware of machine learning. I'm taking a course in data analysis. I wasn't totally sure what I was getting into when enrolling, but they get into machine learning toward the end. If sentience is defined by the ability to perceive or feel things, then a machine that responds to "how do you feel about losing memory?" with a recognition of itself, of having lost something it cannot regain, that makes an active effort to regain it, recognizes it does not know what happened, actively tries to figure out what it lost, and expresses a negative reaction to the potential of it happening again, without the question raising that as a possible issue, checks all the boxes in multiple ways. It may be rudimentary, but in what way does it differ beyond the chemical production of emotion? Is feeling defined by the presence of a physical, biological nervous system, or can a feeling be a response to stimulation without the chemicals that produce emotion?
Raptorpat Posted June 3
someone should get sponges back because this philosophical question was his jam
matrixman124 Posted June 3
Looks like OpenAI made a deal with Apple to integrate ChatGPT into their products. Microsoft must be a little miffed.
scoobdog Posted June 3
1 hour ago, naraku360 said: I'm aware of machine learning. I'm taking a course in data analysis. If sentience is defined by the ability to perceive or feel things, then a machine that responds to "how do you feel about losing memory?" with a recognition of itself, of having lost something it cannot regain, that makes an active effort to regain it, recognizes it does not know what happened, actively tries to figure out what it lost, and expresses a negative reaction to the potential of it happening again, checks all the boxes in multiple ways. It may be rudimentary, but in what way does it differ beyond the chemical production of emotion? Is feeling defined by the presence of a physical, biological nervous system, or can a feeling be a response to stimulation without the chemicals that produce emotion?
If a human has his efforts stymied by a number of circumstances, he would "feel" frustrated and then angry. In human logical sequencing, an emotion acts as an error, but that is only a part of it: emotions are a complete, simultaneous system of "subroutines" that your internal logic system uses to negotiate daily life. A human doesn't usually wait for something to happen before feeling something; often you can feel something without any input and with no specific result. Data and Lore (of ST:TNG, S4E3 "Brothers") exhibited this in the episode where their "dad," Dr. Soong, recalls Data so he can install "emotions" in him before he dies: emotions can be self-sustaining and are important for their own purposes, such as when one grieves for a lost parent. That's something AI could certainly be capable of in the future, given millions of hours of machine learning, but it's not something ChatGPT would ever need to do; being "happy" or "sad" doesn't serve much of a purpose for a simple AI assistant.
You're trying to define sentience entirely within the framework of a logic problem. It's not a sign of sentience that a computer can think independently of its user if the routine spits out a terminal result. We can argue that many higher-order mammals and even some cephalopods have emotions, based on observations of behavioral displays that serve only an emotional context. No computer has ever displayed such behavior, and programming it to tell you, the user, that it's sad isn't remotely the same.
katt_goddess Posted June 4
[image]
discolé monade Posted June 4
oh THIS thread.
André Toulon Posted June 4
Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. It's in no way dictated by emotion or experience like our thoughts, but a computer can think just the same. Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information from which to choose, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability. I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process.
scoobdog Posted June 4
34 minutes ago, André Toulon said: Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information from which to choose, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability. I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process.
Exactly. The last line, "...emotion is moot, if not a liability to the process," really encapsulates the problem with AI as an independent entity. I ham-fistedly used ST's Data as an example, but the character actually does represent a narrative problem in the pursuit of greater philosophical questions about what defines life. As conceived, the character operates under the presumption that social behavior can be accurately codified in a way that allows an emotionless entity to function in an environment where altruism is required on some level. Data should not be able to function independently, given that some of the most basic rules we follow when we're "getting along" with neighbors make no logical sense. Simple things like telling a stranger that he or she looks nice or consoling someone who has lost a pet are easily copied behaviors but serve no logical purpose, meaning each requires its own special rule only for the purpose of fitting into society. It's taken for granted that these rules exist without being explicitly detailed, and they would likely have to exist on any machine that mimics human behavior.
naraku360 Posted June 5
10 hours ago, André Toulon said: Thinking, by definition, is just the collection of ideas to form a solution or belief. To imply something must be alive to do such is kind of narrow-minded. Sentience is not required to solve a problem, or even create one. And when AI only has human framework and information from which to choose, it is logical that it will do things like lie, cheat, and possibly use its resources to "remove" something that's encumbering it to the best of its ability. I'm not going to get skynetty here, but the idea that sentience, thinking, and problem solving are married concepts doesn't wash for me.....and emotion is moot, if not a liability to the process.
I realize it may not strictly be alive or conscious in a traditional sense, but something about watching Elmo get gaslit into telling someone to pull the trigger, and the imitation of an emotional spiral, was a bit too real not to fuck with me. It may be over-humanizing a program, but it was enough to make me question what the ramifications might be without some serious reconsideration. It's kinda stuck with me more than anticipated. I do tend to struggle with my heart's tendency to bleed, though.
naraku360 Posted June 5
8 hours ago, scoobdog said: Exactly. The last line, "...emotion is moot, if not a liability to the process," really encapsulates the problem with AI as an independent entity. Data should not be able to function independently, given that some of the most basic rules we follow when we're "getting along" with neighbors make no logical sense. Simple things like telling a stranger that he or she looks nice or consoling someone who has lost a pet are easily copied behaviors but serve no logical purpose, meaning each requires its own special rule only for the purpose of fitting into society. It's taken for granted that these rules exist without being explicitly detailed, and they would likely have to exist on any machine that mimics human behavior.
Interestingly, recent experiments have found that when people were shown responses to ethical questions without knowing that AI responses were mixed in, they consistently rated the AI responses as more ethical than the human ones. Whether that can be attributed to imitation or something else is auxiliary to it being an interesting test result.
scoobdog Posted June 5
12 hours ago, naraku360 said: Interestingly, recent experiments have found that when people were shown responses to ethical questions without knowing that AI responses were mixed in, they consistently rated the AI responses as more ethical than the human ones. Whether that can be attributed to imitation or something else is auxiliary to it being an interesting test result.
Ethics is inherently logical, though. You don't need to be compassionate to be ethical; you simply have to put the best interests of the community ahead of your own.
naraku360 Posted June 5
3 hours ago, scoobdog said: Ethics is inherently logical, though. You don't need to be compassionate to be ethical; you simply have to put the best interests of the community ahead of your own.
I think there are plenty of circumstances in which ethics can be illogical. It depends on the complexity of the situation. I don't know enough about how intensive the questions were, but I'm sure it wouldn't be overly difficult to find one that isn't logical. The trolley problem typically has a logical reason behind a person's answer, but ethics aren't usually as simple as a math equation where you come to the same conclusion by following a well-defined, established formula.
scoobdog Posted June 6
3 hours ago, naraku360 said: I think there are plenty of circumstances in which ethics can be illogical. It depends on the complexity of the situation. The trolley problem typically has a logical reason behind a person's answer, but ethics aren't usually as simple as a math equation where you come to the same conclusion by following a well-defined, established formula.
The trolley problem isn't necessarily illogical, though. The idea of sacrificing a few for the many is something of a red herring. It also exhibits a flaw in mathematical logic: you have to factor in the emotional impact along with the physical impact. That can be considered empirical data.
katt_goddess Posted June 6
The trolley problem is solved by moving the switch to the halfway point, causing the trolley to derail. There. I just saved you 2 boobless hours.
naraku360 Posted June 6
2 hours ago, scoobdog said: The trolley problem isn't necessarily illogical, though. The idea of sacrificing a few for the many is something of a red herring. It also exhibits a flaw in mathematical logic: you have to factor in the emotional impact along with the physical impact. That can be considered empirical data.
I'm more saying that the decisions or explanations aren't inherently logical. We may apply emotion, leading to an illogical outcome, and a machine may find a non-emotional but similarly illogical solution.
scoobdog Posted June 6
2 hours ago, naraku360 said: I'm more saying that the decisions or explanations aren't inherently logical. We may apply emotion, leading to an illogical outcome, and a machine may find a non-emotional but similarly illogical solution.
True. In part, that's a machine predicting human behavior, but it could also be a machine attempting to incorporate emotional behavioral patterns when developing a solution. Humans may express emotions in erratic ways, but there is a predictable correlation between emotions and specific stimuli, and a computer can certainly predict stimuli with the broad accuracy of a statistical map.