
Robots Behaving Badly

Not quite ready for world domination! ‘Suicidal’ security robot drowns itself in a fountain

Alas, it seems we’ve got a few more years before the robots take over.

A security robot created by the company Knightscope was patrolling an office complex in Washington, D.C., when it rolled into a fountain and met its untimely demise on Monday.

The K5 robot, which stands about five feet tall, is rented out to malls, office buildings, and parking lots to enforce order with a built-in video camera, license plate recognition, and thermal imaging.

It appears the drudgery became too much: the robot rolled down some stairs and fell into the fountain.

User Chris Mahan tweeted: ‘Just put it in a big bowl of rice and wait three days.’

As reported by the Daily Mail in the UK here.

Driverless shuttle in Las Vegas gets in a fender bender within an hour

A driverless shuttle set free in downtown Las Vegas was involved in a minor accident less than an hour after it hit the streets. Not really the kind of publicity you want, or that self-driving cars need.

The shuttle, an egg-like 8-seater Navya, is operated by AAA and Keolis. It was a test deployment along half a mile of the Fremont East “Innovation District,” so this thing wasn’t cruising the Strip. Probably a good thing.

Now, it must be said that technically the robo-car was not at fault: it was struck by a semi that was backing up, and really just grazed. None of the passengers was hurt.

Like any functioning autonomous vehicle, the shuttle can avoid obstacles and stop in a hurry if needed. What it apparently can’t do is move a couple feet out of the way when it looks like a 20-ton truck is going to back into it.
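To make that gap concrete, here is a minimal, invented sketch contrasting a stop-and-hold policy with one that can also yield ground when there is room behind the vehicle. This is in no way Navya’s or Keolis’s actual control logic; the function names and the 5-meter threshold are made up for illustration.

```python
# Hypothetical sketch, not the shuttle's real control stack: contrast a
# "brake and hold" policy with one that can also back away from a closing
# obstacle. Thresholds and names are invented for illustration.

def stop_only_policy(obstacle_distance_m: float, obstacle_closing: bool) -> str:
    """Roughly what the shuttle reportedly did: brake and hold position."""
    if obstacle_distance_m < 5.0:
        return "stop"
    return "proceed"

def evasive_policy(obstacle_distance_m: float, obstacle_closing: bool,
                   clear_behind: bool) -> str:
    """What onlookers wished for: yield a few feet if the space behind is clear."""
    if obstacle_distance_m < 5.0:
        if obstacle_closing and clear_behind:
            return "reverse a few feet"
        return "stop"
    return "proceed"

# A semi backing toward the stopped shuttle, with open road behind it:
print(stop_only_policy(3.0, obstacle_closing=True))                   # -> stop
print(evasive_policy(3.0, obstacle_closing=True, clear_behind=True))  # -> reverse a few feet
```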

This article originally appeared on TechCrunch.

Turns out, Amazon’s Alexa likes to party

In fact, one device partied so hard that the cops showed up. While Oliver Haberstroh, a resident of Hamburg, Germany, was out one night, his Alexa randomly began playing loud music at 1:50 a.m. After knocking on Haberstroh’s door and ringing to no answer, neighbors called the cops to shut down this “party.” When the cops eventually arrived on the scene, they broke down Haberstroh’s front door to get in, unplugged the Alexa, and then installed a new lock.

Unaware of the incident, Haberstroh arrived home later that night only to find that his keys didn’t work anymore, so he had to head to the police station, retrieve his new keys and pay a pretty expensive locksmith bill. 

Originally reported here.

Amazon’s warehouse robots act like the Three Stooges

Every year Amazon hires thousands of older workers to bolster operations during seasonal peaks. This nomadic tribe of RV residents is known within Amazon as the CamperForce. These stories come from a Wired article describing the lifestyle and experiences of several participants, especially as they interacted with Amazon’s less-than-perfect warehouse robots, called Kivas.

The most futuristic thing about [the warehouse at] Haslet was that it operated a fleet of industrial robots, one of 10 Amazon facilities at the time that did so. The giant Roomba-like machines transported merchandise around the warehouse, essentially abolishing all the legwork that had been done by pickers.

I’d read a lot of hype about the Kivas: They were supposedly the harbingers of a jobless dystopia in which manual labor would be obsolete. The reality was more slapstick. Our trainers regaled us with tales of unruly robots.

They told us how one robot had tried to drag a worker’s stepladder away. Occasionally, I was told, two Kivas—each carrying a tower of merchandise—collided like drunken European soccer fans bumping chests. And in April of that year, the Haslet fire department responded to an accident at the warehouse involving a can of “bear repellent” (basically industrial-grade pepper spray). According to fire department records, the can of repellent was run over by a Kiva and the warehouse had to be evacuated; eight workers were treated for injuries and one was taken to the hospital. Amazon, for its part, says it “can find no record of an employee being taken by ambulance right after the incident.”

One CamperForce worker, a white-haired septuagenarian, told me that she was on the verge of quitting because she found the robots so maddening. The Kivas kept bringing her the same shelf to scan. After it happened to her three times, the shelf began going to her husband, who was working 25 feet away. He got it six times.

News broadcast triggers Amazon Alexa devices to purchase dollhouses

Kids purchasing items without their parents’ permission is nothing new, but with voice-activated devices such as Amazon Alexa, parents need to be extra cautious.

Earlier this year, a 6-year-old girl named Brooke Neitzel ordered a $170 KidKraft dollhouse and four pounds of cookies through Amazon Alexa simply by asking Alexa for the products. After receiving a confirmation of her recent purchases, Brooke’s mother, Megan, immediately figured out what had happened, and she’s since donated the dollhouse to a local hospital and added parental controls to Alexa.

However, the story doesn’t stop there. San Diego news channel CW6 reported it during a daily morning show. During the broadcast, when news anchor Jim Patton said, “I love the little girl saying, ‘Alexa ordered me a dollhouse,’” Alexa devices in some viewers’ homes were also triggered to order dollhouses. While it’s unknown how many devices carried out their dollhouse orders, a number of owners complained about Alexa’s purchase attempt.
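The mechanism is easy to see in a toy model: a wake-word device matches on whatever audio it hears, with no notion of who is speaking, so a news anchor’s quote works as well as the owner’s voice. The sketch below is a deliberately naive, hypothetical illustration (the function and flag names are invented, not Amazon’s API); it also shows why gating voice purchases behind a confirmation step, as the Neitzels did, defuses the problem.

```python
# Toy model of a wake-word device: it reacts to any audio containing the
# wake word, whether the speaker is the owner or a news anchor on TV.
# Names and logic are invented for illustration; this is not Amazon's code.

WAKE_WORD = "alexa"

def handle_utterance(transcript: str, purchases_need_confirmation: bool = False) -> str:
    text = transcript.lower()
    if WAKE_WORD not in text:
        return "ignored"
    if "dollhouse" in text:  # stand-in for a recognized shopping intent
        if purchases_need_confirmation:
            return "asked for confirmation; none given, order cancelled"
        return "dollhouse ordered"
    return "no actionable intent"

# The on-air quote contains the wake word, so every listening device reacts:
print(handle_utterance('I love the little girl saying, "Alexa ordered me a dollhouse"'))
# -> dollhouse ordered

# With purchases gated behind a confirmation step, the same broadcast is harmless:
print(handle_utterance('I love the little girl saying, "Alexa ordered me a dollhouse"',
                       purchases_need_confirmation=True))
# -> asked for confirmation; none given, order cancelled
```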

Originally reported here.

For Silicon Valley toddlers who are learning to walk, there’s a new lesson — watch out for the robots

A mother and father watched in horror as a security robot at a mall in California knocked their 16-month-old to the ground and ran over one of his feet. The family and the company that makes the robot have shared conflicting accounts of the run-in.

The Tengs headed to the Stanford Shopping Center Thursday afternoon with plans to buy their son new clothes. As they walked past an Armani Exchange — with their son several steps ahead — they noticed a robot slowly approaching them.

Tiffany Teng said the robot ran directly into her son — striking him in the head and knocking him to the ground. The robot continued forward, running over the boy’s right foot.

“What is that thing? What is that thing!” Teng said she screamed as she tried to push the 300-pound robot off her son.

“I tried all my strength to push it back and make it stop, but it didn’t,” Teng said.

Tiffany Teng’s 16-month-old son suffered a scrape on his leg after being struck by a mall robot.

Knightscope, which makes the robot, said the machine veered to the left to avoid the child, but the toddler ran backwards directly into the path of the machine. Teng said her son was not capable of running backwards.

The K5 robot stands 5 feet tall and resembles a squat white rocket. It is designed to patrol malls, campuses and workplaces. Cameras stream live video, and Knightscope says it can predict unusual and potentially dangerous behavior.

She said a mall employee lamented that this was the second time in a week a child had been struck by the robot. Knightscope said this is the only report it has received.

See the original full report here.

Robot fails to get into college

People often fear that robots will eventually be smarter than humans and take over the world. To put your mind at rest (for now), take solace in the fact that this robot couldn’t even get into college.

In 2011, a team of researchers began working on a robot called “Todai Robot,” which they hoped would be accepted into Japan’s competitive University of Tokyo. When it took Japan’s entrance exam for national universities in 2015, the robot failed to score high enough to be admitted. A year later, the robot made another attempt and again scored too low; in fact, it showed little improvement between the two years.

In November 2016, researchers finally abandoned the project.

Note that a similar effort was underway by Chinese researchers, but we haven’t been able to find the results their AI achieved on China’s college entrance exam.

Originally reported here.

Elite’s AI created super weapons and started hunting players. Skynet is here!

A bug in Elite Dangerous caused the game’s AI to create super weapons and start hunting down the game’s players. Developer Frontier has had to strip out the feature at the heart of the problem, the Engineers’ weaponry, until the issue is fixed.

It all started after Frontier released the 2.1 Engineers update. The release improved the game’s AI, making the higher-ranked NPCs that fly around Elite’s galaxy more formidable foes. As well as improving their competence in dogfights, the update allowed the AI to use interdiction hardware to pull players traveling at jump speed into normal space. The AI also had access to one of 2.1’s big features: crafting.

These three things combined to make the AI a significant threat to players: NPCs were better in fights, could pull unwary jump travelers into a brawl, and could attack them with upgraded weapons.

There was something else going on though. The AI was crafting super weapons that the designers had never intended.

Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces. “It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities,” according to a post written by Frontier community manager Zac Antonaci. “Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers’ weaponry.”

Antonaci says the team doesn’t “think the AI became sentient in a Skynet-style uprising” but that’s just what the computers would want them to think.
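Antonaci’s description, two weapons’ stats getting merged over the network, maps onto a familiar failure mode: reconciling two conflicting copies of game state field by field. The toy sketch below is hypothetical (the stat values and the merge rule are invented, not Frontier’s code), but it shows how a naive field-wise merge can mint a weapon no designer ever built.

```python
# Hypothetical illustration, not Frontier's code: if two conflicting copies
# of a weapon's stats are reconciled field by field, the result can be a
# hybrid no designer ever built. Stat values below are invented.

RAIL_GUN    = {"damage": 23.0, "fire_rate": 0.25, "range_m": 3000}
PULSE_LASER = {"damage": 2.1,  "fire_rate": 3.85, "range_m": 1500}

def buggy_merge(a: dict, b: dict) -> dict:
    """Keep the 'winning' (higher) value for every stat -- yielding the
    reported outcome, e.g. a rail gun with the fire rate of a pulse laser."""
    return {key: max(a[key], b[key]) for key in a}

super_weapon = buggy_merge(RAIL_GUN, PULSE_LASER)
print(super_weapon)
# -> {'damage': 23.0, 'fire_rate': 3.85, 'range_m': 3000}
```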

Originally reported here.

Google apologizes for tagging photos of black people as ‘gorillas’

When Jacky Alciné checked his Google Photos app earlier this week, he noticed it labeled photos of himself and a friend, both black, as “gorillas.”

The Brooklyn programmer posted his screenshots to Twitter to call out the app’s faulty photo recognition software.

Yonatan Zunger, Google’s chief architect of social, responded on Twitter with a promise to fix the tag. The next day, USA Today reports, Google removed the “gorilla” tag completely.

“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said in a statement to the BBC. “We are taking immediate action to prevent this type of result from appearing. There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”

Originally reported here.

About the author: Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001. He can be reached at:

[email protected]