Robots that kill and Robots that don't
Posted 5 Nov 2006 at 01:16 UTC by steve
The folks over at Sci Fi Tech posted a story on Samsung's
new killer robot, which can autonomously track humans visually and fire
an automatic weapon at them. Known as the Samsung
Techwin SGR-A1, the robots will be deployed along the DMZ between
North and South Korea in 2007, replacing 650,000 South Korean troops. While
Samsung is busy making robots more dangerous, Swiss company Neuronics
AG is working on making
robots safer. They've developed a new low speed, high precision
robot arm, called Katana, that can work side by side with humans without
endangering them. Conventional industrial robots are very dangerous and
require safety fences to keep humans out of harm's way.
Nice machines. Maybe the USA should also buy loads of them for the huge Mexican border? They could mean great manpower cost savings!
There's nothing quite so reassuring as an army of robots designed to hunt humans and leave trees alone.
Another Judge Dredd lookalike, this one from the film: like the ones that guard the future prison iso-cubes (isolation cubes), and Dredd's clone brother too.
Sometimes you feel like a kill,
sometimes you don't.
Samsung's bots make kills,
Sometimes you feel like a kill, sometimes you don't!
The killer bot is impressive, although I have no idea what use I would have for one other than fitting it with a water gun as a prank. You could hook it up to a water hose and power line, then just leave it and wait for someone to wander within range. ...Actually, maybe you could have it do something useful by giving it a hose and putting it on top of a fire truck.
As for the safe, precision robot, I guess robots are about to take over the pocket watch industry. The website also advertises a similar robot on wheels, and I'm hoping it can plug itself into an outlet for some self-recharging action.
And finally, I find it somewhat amusing that it's the safe robot that's named after a weapon, and not the robot that's supposed to be used as one.
This is really no different from a whole host of weaponry already in use, i.e. something which features some degree of autonomy in delivering a bullet or explosion.
That people see this as somehow more sinister than a smart bomb or self-guided missile is because of its robotic nature: they empathise with a non-existent 'desire' of the machine to kill, which is really no different from any other 'smart' weapon, except in this case it is more defined, personal... dedicated, and hence easier to anthropomorphise, given its physical and machine-like nature.
Doubtless, because of this, certain observers will cite this in their reasoning when they predict the Future Evil Robotic World Domination (tm) scenario.
Yes, posted 6 Nov 2006 at 22:17 UTC by marev »
You may use big intelligent words, slap.fish, but I really do understand what you mean. The USA has got far better combat robots than this already, and the video of the Samsung robot looked either fake or hyped up. Who is 8 please?
The main reason that this seems significant is that other smart weapons don't decide their own targets; targets are set by an operator, and the only decisions made pertain to the trajectory to that target.
This weapon is more likely to be ordered to kill anyone in range not on the "safe list". The concern is that such robots might not be able to properly assess the potential threat of their targets, and could kill people who might have been handled with less force.
I don't think AI is capable enough to make decisions like that just yet, and until it is, automated weapons should ask for permission before attacking. As far as I know, that's what all the robotic fighters deployed by the U.S. do.
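The permission-before-attack idea described above can be sketched in a few lines of code. This is purely an illustration, not any real weapon-control API: the threat score, the 0.5 threshold, and the function names here are all invented for the example. The point is simply that the machine may flag a target, but a human callback holds the veto.

```python
# Hypothetical sketch of a man-in-the-loop engagement policy.
# No part of this reflects a real system's API or thresholds.

def assess_threat(target):
    """Invented classifier: returns a threat score in [0, 1]."""
    return 0.9 if target.get("armed") else 0.1

def request_engagement(target, operator_confirms):
    """Engage only if the threat is high AND a human operator confirms."""
    if assess_threat(target) < 0.5:
        return "ignore"            # machine decides it isn't a threat
    if not operator_confirms(target):
        return "hold"              # human vetoed, or gave no answer
    return "engage"                # both machine and human agreed

# Example: the system flags an armed intruder, but the human says no.
intruder = {"armed": True, "id": "unknown"}
print(request_engagement(intruder, lambda t: False))  # -> hold
```

The design choice being debated in the thread is exactly the second `if`: a fully autonomous sentry like the SGR-A1 effectively removes that line.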
Yes., posted 7 Nov 2006 at 17:46 UTC by marev »
Yes, these robots are not half as good an idea, because the USA's and other countries' military robots have to ask permission on some level and the technology is far better. That's why those countries are the "good guys" of sorts in warfare and do their best not to kill everything that moves.
The difference is the more general case of man-in-the-loop. Modern combat systems (e.g. naval combat systems) are totally capable of assessing the threat to the ship and deciding which targets to prosecute, when, and which missiles to engage with, and I'm sure there's a lot more besides. Usually, though, there is a POLICY to have a human at a critical decision point, usually to accept/reject threat engagement, just in case it's an Iranian 767 rather than an Iranian F-14.
Having said that, there is an awful lot of 'AI' already in ship/aircraft combat systems, and 99.9% of the time they would take the correct action.
I have to say I would virtually trust the AI of the latest US machines more than people's. Big topic with lots of angles, though.
But there's only so much situational flexibility you can program in. I'd leave it to a warfare officer.
What's different about these fellas is that they're essentially the same as a missile in flight. By placing them in the DMZ the human decision has already been taken. The decision being to shoot ANYONE.
Good job they ain't got wheels.
Terminate it., posted 9 Nov 2006 at 13:09 UTC by marev »
Missiles can be terminated once in flight; override capability runs throughout advanced systems.