I wonder what Lee & other math Wizards think of this?
John
Why AIs that tackle complex maths could be the next big breakthrough
Mathematical AIs show machine intelligence may emerge from unexpected pursuits
Read in New Scientist.
|
On Apr 14, 2024, at 6:42 AM, John Robinson via groups.io <profilecovenant@...> wrote: I wonder what Lee & other math Wizards think of this?
John
Why AIs that tackle complex maths could be the next big breakthrough
Mathematical AIs show machine intelligence may emerge from unexpected pursuits
Read in New Scientist.
I’ve been thinking about writing about AI here ever since February 3, when John posted the following:
Harry, I’m a guy that listens to hundreds of Audio Books, many Biographies, Autobiographies. Steve Jobs, Elon Musk, Charlie Munger, Ray Dalio, Tony Robbins and on and on it goes. I also listen to a plethora of investment works. I’VE NEVER HAD ONE TOTALLY CHANGE MY THINKING like “The Coming WAVE” by Mustafa Suleyman.
I added that book to my reading list, and finally finished it in the middle of March. I’ve been mulling over it and several other ones ever since.
To greatly oversimplify, right now what we call AI is really a huge pattern search engine. We feed it a bunch of data and it correlates the data into a big collection of connected nodes. It is also programmed with rules that tell it how to traverse the nodes. With a huge set of input data and a rich rule set, there might be many ways to traverse the graph. Given a command, it chooses the path its rules deem most likely to produce a correct result. This process can require a huge amount of computing if the data set is large and the rules are complex.
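To make that concrete, here’s a toy sketch in Python. Everything in it (the graph, the probabilities) is invented purely for illustration: the nodes are the connected data, the edge weights stand in for the rules, and the program picks the traversal it scores as most likely to be correct.

```python
# Toy "pattern search engine": weighted nodes, and a search for the
# path the rules score as most likely to lead to a correct answer.
# All names and weights below are made up for illustration.
graph = {
    "question": [("fact_a", 0.9), ("fact_b", 0.4)],
    "fact_a":   [("answer", 0.7)],
    "fact_b":   [("answer", 0.9)],
    "answer":   [],
}

def best_path(node, goal, prob=1.0, path=None):
    """Exhaustively search for the highest-probability path to goal."""
    path = (path or []) + [node]
    if node == goal:
        return prob, path
    best = (0.0, None)
    for nxt, p in graph[node]:
        cand = best_path(nxt, goal, prob * p, path)
        if cand[0] > best[0]:
            best = cand
    return best

prob, path = best_path("question", "answer")
# Picks question -> fact_a -> answer (score 0.9 * 0.7 = 0.63), beating
# the fact_b route (0.4 * 0.9 = 0.36).
```

Real systems search graphs with billions of nodes and far subtler scoring rules, which is where the huge computing bill comes from.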
In mathematics, the computer is given a data set of known results in an area. For example, it might be given a huge collection of algebra equations and the steps to their solutions. It’s also given the rules of algebra and logic. When you ask it to solve a new equation, it will look for something in its dataset that is similar to the new equation. Since the thing in its dataset has a well-defined solution, it tries to use the same steps to solve the new equation. Programs such as Mathematica, Maple, and SageMath have been doing this for many years. They can solve any equation from a high school algebra class and much more.
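For instance, a toy solver might store the solution steps for the template ax² + bx + c = 0 (the quadratic formula) and replay them on any new equation it recognizes as matching that template. This sketch is invented for illustration and is nothing like a real CAS:

```python
import math

def solve_quadratic(a, b, c):
    """Replay the memorized solution steps for the ax^2 + bx + c = 0
    template: compute the discriminant, then apply the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real solutions: the template's steps don't apply
    root = math.sqrt(disc)
    # set() collapses the double root when disc == 0
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 2 and 3.
roots = solve_quadratic(1, -5, 6)
```

A real CAS does vastly more pattern matching than this, of course, but the recognize-a-template-and-replay-its-steps flavor is the same.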
The problem comes when the new problem can't be twisted into something similar to anything in the data set; then the program is stuck.
All the examples I know of where AI systems solved difficult math problems are similar to what I described above, except the training data sets are many examples of proofs, and the rules are complicated. With a diverse enough data set and a rich set of rules, the program might very well be able to construct tricky proofs of difficult problems.
In fact, quite a few years ago I read an article in Math Intelligencer (I think?) about a computer program that was taught the basic facts of Euclidean geometry, which form a nice and fairly simple closed system. The programmers just told it to start proving facts by wandering around its database and following the logic rules it was programmed to use. The result was hundreds of “theorems”. Few of them were interesting and the interesting ones were already known—with a couple of exceptions. It did manage to find some new proofs of well-known theorems.
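That geometry program was essentially doing forward chaining: start from the axioms, apply every inference rule to the known facts, and keep whatever new statements come out, interesting or not. A toy version (the facts and rules are invented for illustration):

```python
def forward_chain(facts, rules, max_rounds=10):
    """Apply rules to the known facts until nothing new is derived.
    Each rule is (set_of_premises, conclusion)."""
    known = set(facts)
    for _ in range(max_rounds):
        new = {concl for premises, concl in rules
               if premises <= known and concl not in known}
        if not new:
            break  # fixed point: no rule produces anything new
        known |= new
    return known

axioms = {"A", "B"}
rules = [
    (frozenset({"A"}), "C"),
    (frozenset({"A", "B"}), "D"),
    (frozenset({"C", "D"}), "E"),
]
theorems = forward_chain(axioms, rules)
# Derives C and D in the first round, then E, then stops.
```

This mechanically grinds out everything reachable, which is exactly why such a program produces hundreds of "theorems" with no sense of which ones matter.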
The moral to this story is that the computer didn’t really come up with anything new, because it didn’t have any new ideas, and this is the weakness of the AI mathematical proving programs. The great mathematicians are great because they came up with new ideas. The names we remember are Newton, Euler, Gauss, Cantor, Noether, von Neumann, Turing, and many others, because they didn’t just prove hard things; they came up with unprecedented methods and ideas.
Today’s AI programs are showing uncanny technical virtuosity in many areas, but they aren’t yet showing originality.
Don’t think what I’ve written means I think AI is pure hype and just another tech bubble. It is doing some seemingly magical things, particularly in medicine, where it is already better than almost all doctors in spotting some heart problems and breast cancer. In both of these cases, the AI diagnosis is often earlier and more accurate than standard diagnoses, and the experts don’t really understand what it’s seeing.
Returning at last to the Suleyman book, I do think a lot of it is hype coming from an AI entrepreneur. He does a lot of hand-waving and has few details. I do believe his claim that AI will have a profound effect in biology because it’s the only way we know to make sense of the large databases of proteins and DNA. I am skeptical of his claims about the imminence of AGI (= artificial general intelligence).
L^2
The good Christian should beware of mathematicians and all those who make empty prophecies. The danger already exists that mathematicians have made a covenant with the devil to darken the spirit and confine man in the bonds of Hell. — Augustinus
|
Thank you Lee for your in-depth analysis. What are your thoughts on the use of AI on deadly biological weapons by bad actors? Or on AI starting to think & making its own decisions?
I don’t know if you watched Musk’s demonstration of 3 robots with the bulletproof Cybertruck? Pretty cool unless a fake. Supposedly on another occasion one of the robots disagreed with a human & became aggressive with the human’s arm. Musk has been one of the largest proponents of slowing all this down & having government restrictions & safeguards on development, to protect humans. Few seem to agree.
John
|
My guess with Musk is that he would like it to slow down simply because he's behind in the race. His AI company started only last July.
Bill
|
Literature (written and cinematic) is replete with classic if-we-had-only-known scenarios, many of which involve robots or biological experimentation. Typically these involve situations where the “solution” was to solve some other “dire” threat. [One can even find a few juicy ones in the news right now.] Going slow should be the cry of everyone on the cusp of any advanced technology. Those who insist on moving quickly are the ones who will create the “unintended consequences” results.
Jonathan
On Apr 16, 2024, at 10:24 AM, Bill Rising via groups.io <brising@...> wrote:
My guess with Musk is that he would like it to slow down simply because he's behind in the race. His AI company started only last July.
Bill
-- Jonathan Fletcher, Workplace Innovation Facilitator, jonathan@... Kentuckiana FileMaker Developers Group - Next Meeting: 3/26/24. Register at kyfmp.com/reg/ for a link
|
On Apr 16, 2024, at 10:11 AM, John Robinson via groups.io <profilecovenant@...> wrote: Thank you Lee for your in-depth analysis. What are your thoughts on the use of AI on deadly biological weapons by bad actors? Or on AI starting to think & making its own decisions?
I think it’s inevitable AI will be used to perfect weapons of all kinds. There’s not much to be done about it.
I don’t know if you watched Musk’s demonstration of 3 robots with the bulletproof Cybertruck? Pretty cool unless a fake. Supposedly on another occasion one of the robots disagreed with a human & became aggressive with the human’s arm. Musk has been one of the largest proponents of slowing all this down & having government restrictions & safeguards on development, to protect humans. Few seem to agree.
That Musk demonstration was kind of stupid. I thought the whack with the big hammer was more germane. I’m more interested in having my body panels survive fender benders than gun attacks. Of course, remember the epic failure with the “unbreakable” window a few minutes later in his demo.
I doubt the robot “disagreed” with the person. It was a matter of bad programming or the person being in the wrong place. There are many examples of workers injured, or even killed, by robots in factories. These factory robots are mindless automata with less smarts than an ant.
L^2
You can lead a man to Congress, but you can't make him think. — Milton Berle
|
On Apr 16, 2024, at 12:53 PM, Jonathan Fletcher via groups.io <lists@...> wrote: Literature (written and cinematic) is replete with classic if-we-had-only-known scenarios, many of which involve robots or biological experimentation. Typically these involve situations where the “solution” was to solve some other “dire” threat.
Science fiction—one of my favorite topics. There are plenty of science fiction stories on both sides of the conscious AI question. They aren’t always villains.
The I, Robot stories by Asimov from the 1950s and 1960s are pretty much pro-AI. They introduced the famous “three laws of robotics”, which have been much discussed lately. I have always thought The Caves of Steel would make an excellent movie because it’s a murder mystery that also explores the rights of a human-like robot. The forgettable Will Smith movie shares nothing with the excellent and clever Asimov stories except its title.
Heinlein’s 1966 novel The Moon Is a Harsh Mistress has a benevolent conscious AI as its main character. I think it’s his best book.
Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? explores the “humanity” of an artificial intelligence. It was made into the excellent 1982 movie Blade Runner, which is a favorite of mine. (It’s one of the rare cases where the movie is better than the book.)
And then there’s HAL from 2001: A Space Odyssey. At first glance HAL appears malevolent, but it turns out HAL was just obeying its programming instructions. This comes out more clearly in the novels written by Arthur C. Clarke, co-author of the screenplay, especially the sequel 2010: Odyssey Two. (The original movie was inspired by Clarke’s story The Sentinel.)
And we have Data from Star Trek: The Next Generation. In particular, watch the episode The Measure of a Man (Season 2, Episode 9).
I could go on and on with these.
Of course, there are plenty of evil AIs.
The most famous is probably the cyborg in The Terminator.
One that comes to my mind is from the 1966 novel Colossus by D. F. Jones. It was made into a not-so-good 1970 movie, Colossus: The Forbin Project. It imagines what might happen when the entire defense of the country is turned over to an advanced AI called Colossus.
Let’s not forget Battlestar Galactica: the execrable 1978 version was reborn as a much better 2004 series. The Cylons are described as “cybernetic” beings who want to kill all humans.
Then there’s Westworld! Yul Brynner was a marvelously evil robot in the 1973 film. The 2016 TV series has a mix of good and bad robots.
Going slow should be the cry of everyone on the cusp of any advanced technology.
Those who insist on moving quickly are the ones who will create the “unintended consequences” results.
I somewhat disagree with this. The perfume is out of the bottle and people all over the world are going to continue down this road no matter what we think. (Mixed metaphor!) I don’t think any slow-down rules in the USA are going to stop militaries around the world from using the best AI they can develop.
Besides, we’re already swimming in a bath of smaller AIs. As I type this, the computer is guessing word completions and correcting my spelling. The Photos app uses some AI to do face recognition. The suggestion algorithms in YouTube and X probably have a generous sprinkling of AI. Most of the best photo-editing programs have a lot of embedded AI to do magical things like removing that unwanted person from the background of your otherwise perfect vacation photo.
AI is with us to stay, but most of it will take the form of invisible helpers.
A lot of the anti-AI grumbling that’s so much in the news comes from people who don’t really understand AI, or people who see that AI might put them out of work. It’s a modern version of the Luddite rebellion.
Watching CSPAN to eavesdrop on Congressional hearings about AI is cringe-worthy. Our Congress-critters are almost universally techno-illiterate.
L^2
Move fast and break things! — Mark Zuckerberg
|
On Apr 16, 2024, at 10:32 PM, Lee Larson via groups.io <leelarson@...> wrote: Move fast and break things! — Mark Zuckerberg
Said the creator of SkyNet. -- Jonathan Fletcher, Workplace Innovation Facilitator, jonathan@... Kentuckiana FileMaker Developers Group - Next Meeting: 4/23/24. Register at kyfmp.com/reg/ for a link
|
Lee, so refreshing to have your comments. This has been a topic you relish.
John
|