I’ll Be Back.......
Page 2 of 4
posted on 12/2/15
comment by Busby (U19985)
posted 31 seconds ago
comment by Lefty (U17934)
posted 35 minutes ago
comment by Busby (U19985)
posted 3 seconds ago
Not that it matters, this won't affect anybody living on this planet for hundreds of years.
----------------------------------------------------------------------
Why?
Why not in the next 20-30??
----------------------------------------------------------------------
The cost involved in building such a machine for a start, never mind enough to destroy the human race.
----------------------------------------------------------------------
Who said we would build it?
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
If they had access to either nuclear launch codes or reactors they could blackmail humanity. Mutually assured destruction
=============================================================
You mean like in that film?
Computers are not even capable of identifying humans as a species, let alone blackmailing them. They recognise other computers as a source of more processing power, but what impulse is going to drive them to band together and attack humans, when they have no concept of what a human is?
Computers teach themselves things in order to solve problems. That is the impulse that we’ve taught them.
Which problem are they solving by blackmailing humans, and what is driving them to try and solve it?
If it were to happen at all, it would be because we set them a problem, and then couldn't stop them trying to solve it (which I think is what happens in that film), but I think that leaves 'blackmail' out of the question.
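Put another way, the only 'impulse' a program has is whatever objective we hand it. A toy sketch (purely illustrative, nothing to do with any real system):

```python
import random

def hill_climb(objective, start, steps=1000):
    """Toy optimiser: its only 'impulse' is the objective function it is given."""
    best = start
    best_score = objective(best)
    for _ in range(steps):
        candidate = best + random.uniform(-1.0, 1.0)
        score = objective(candidate)
        if score > best_score:   # nothing outside this comparison is ever considered
            best, best_score = candidate, score
    return best, best_score

# We define the problem; the program has no notion of anything beyond it.
peak_at_three = lambda x: -(x - 3.0) ** 2
print(hill_climb(peak_at_three, start=0.0))   # ends up near x = 3
```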
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Rimmy – We could call it blackmail, but to them it would be simple logic: we need power; in order to get it, we destroy the thing that is preventing us from obtaining it. That's logic... we could define it as blackmail if you want.
Wessie – We do not identify computers as a species... so what? Teaching them things to solve problems... well, here are a few:
How do we secure the Earth's limited resources to ensure our own existence? Easy: destroy anything else that uses them.
How do we stop global warming and a major world catastrophe? Easy: destroy whatever is causing the damage.
No emotions needed, just simple logic.
We wouldn’t set them any problems; we are talking about super-intelligent beings who can “think” for themselves.
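Put that in planner terms and you can see why no malice is required: if the only thing the logic scores is resources secured, the option that removes the competitor wins by default. A toy sketch (the actions and numbers are invented, purely for illustration):

```python
# Toy 'planner': it scores actions only by resources secured. Nothing in the
# objective mentions humans, so nothing penalises the harmful option.
actions = {
    "share the grid":        {"resources": 40, "harms_humans": False},
    "ration its own usage":  {"resources": 25, "harms_humans": False},
    "cut humans off":        {"resources": 90, "harms_humans": True},
}

def score(outcome):
    return outcome["resources"]   # the only thing the logic is asked to care about

best_action = max(actions, key=lambda a: score(actions[a]))
print(best_action)   # -> "cut humans off", by plain maximisation, no malice needed
```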
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Very interesting thread
I have watched the Terminator films and each of the Sarah Connor Chronicles series.
Once, through AI, you produce a self-preservation methodology, a machine will learn what it takes to preserve itself.
You get PTSD amongst servicemen. Probably some of them will sue their country's army/government, and the financial case for going to war will become prohibitive for humans. The powers that be may think "Why not develop robots who'll do all the functions of a human soldier without the mental/emotional baggage?"
posted on 12/2/15
comment by Admin1 (U1)
.............We see a huge gulf between Einstein and the Village Idiot. For an AI General Intelligence system devoting a portion of its resources to improving its own intelligence, the transition from Village Idiot to Einstein can and will happen in a matter of minutes. The entire spectrum of intelligence, in human terms, is possibly a 1cm notch on a metre stick.
***********************************
Now that sounds scary!
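The claim is really just compounding: if the whole human range is a thin sliver of the scale, a system that keeps reinvesting in its own improvement crosses it in a handful of cycles. A rough sketch (every number here is made up; only the shape of the growth matters):

```python
# Rough compounding sketch of the 'Village Idiot to Einstein' claim above.
capability = 1.00        # arbitrary units: 1.00 = village idiot
einstein   = 1.10        # assume the whole human range is a ~10% sliver of the scale
growth     = 0.05        # fraction of capability reinvested in self-improvement per cycle

cycles = 0
while capability < einstein:
    capability *= 1 + growth   # each cycle's improvement compounds on the last
    cycles += 1

print(f"Crossed the human range in {cycles} cycles")   # 2 cycles at these made-up numbers
```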
posted on 12/2/15
How do we secure the Earth's limited resources to ensure our own existence? Easy: destroy anything else that uses them.
======================================================
What impulse would be driving it to secure its own existence? We know that genes do this instinctively, but we do not know why they do it. And if we do not know (and we're not going to find out within the next 30 years), we cannot teach it to computers.
And how would they identify humans as something that is destroying the earth's existence? To do that, they would have to be able to identify us as a species, and what impulse would be driving them to do that?
When a computer is switched off, it may well have enough intelligence to know it is being switched off, but I can't see how it would know what is switching it off, unless the human concerned is wired up to the internet.
Until they’ve built a £200 computer that can do my housework, do the garden, cook my meals, etc., I’m willing to let the boffins continue making them more and more powerful.
If it starts mis-behaving (banging the missus, that sort of thing), I’ll have to cross that bridge when I come to it.
While it’s true that computers can access other computers in order to get more power and more processing capacity, it's also true that for the sake of a £200 butler, I’m willing to wait and find out how well it does this when it’s been doused in petrol and set alight.
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
comment by Wessie Road (U10652)
posted 2 minutes ago
How do we secure the Earth's limited resources to ensure our own existence? Easy: destroy anything else that uses them.
======================================================
What impulse would be driving it to secure its own existence? We know that genes do this instinctively, but we do not know why they do it. And if we do not know (and we're not going to find out within the next 30 years), we cannot teach it to computers.
And how would they identify humans as something that is destroying the earth's existence? To do that, they would have to be able to identify us as a species, and what impulse would be driving them to do that?
When a computer is switched off, it may well have enough intelligence to know it is being switched off, but I can't see how it would know what is switching it off, unless the human concerned is wired up to the internet.
Until they’ve built a £200 computer that can do my housework, do the garden, cook my meals, etc., I’m willing to let the boffins continue making them more and more powerful.
If it starts mis-behaving (banging the missus, that sort of thing), I’ll have to cross that bridge when I come to it.
While it’s true that computers can access other computers in order to get more power and more processing capacity, it's also true that for the sake of a £200 butler, I’m willing to wait and find out how well it does this when it’s been doused in petrol and set alight.
----------------------------------------------------------------------
How would it know about securing its own existence?
Intelligence. It learns. It may well even recognise itself as a species.
How does it know about being switched off?
Well, we do not know what happens when we die... but we still do not want it to happen.
If it starts mis-behaving (banging the missus, that sort of thing)
The toy under her bed??
You may want a butler etc... but at the moment most of the money is being spent on defence, warfare etc.
P.S. Dishwashers, microwaves etc... it's almost there.
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Dishwashers, microwaves etc... it's almost there
=========================================================
Nah, it's miles away. My dishwasher is thicker than Winston. The only thing it knows how to do is wash dishes, and every few years it breaks down even doing that.
I think we're quite a long way from having butlers, let alone world domination.
Computers can resolve very complex problems, but they do not resolve them at random; they need to have a reason for solving them. Since we do not know the reason for our own survival instincts, I cannot see how computers are going to develop one, because resolving unanswerable questions is not a matter of intelligence.......... 42.
Scientists are going through a bumptious, over-confident phase of thinking they're quite close to resolving everything, and that computers will be able to get them there.
But once we've stopped marvelling at how much is possible with all this combined and connected processing power (and the possibilities are enormous, and I haven't even stopped marvelling at Google, yet), the time will come when we start saying "oh, maybe not that, then" (though that may be some way off, too).
There are some things they’re just not good at. As mentioned above, in spite of all the money and research spent on it, they cannot translate languages for toffee, and personally, I share Chomsky's view that they'll never be able to do it.
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
Comment deleted by Site Moderator
posted on 12/2/15
OK - that was a bad example.
It seems we are becoming more and more reliant on technology. It's pretty much got its fingers in most aspects of our lives.
The advancements are breathtaking. People would have laughed 15 years ago if you said you could carry 32GB of storage on a key-chain.
We can barely control the internet... and that's man-made... and I would say it's pretty much out of control, with no real means of getting a grasp of it.