    Why Microsoft's chat robot developed insulting language on Twitter, and what the experiment tells us about the future of artificial intelligence

    Artificial intelligence has advanced enormously in recent years, but progress has not reached every area, as shown by an experiment Microsoft ran with a chat "robot" called Tay, which learned a lot of nonsense and used offensive language after interacting with young people on Twitter for less than a day.

    The robot was designed for educational purposes, but it ended up posting messages with sexual content and messages praising Hitler.

    You can read below why this happened.


    Tay Tweet is a chat-bot, a virtual Twitter user that Microsoft created to test how well it understands conversations with young people, and to see what it would post after following what young users wrote to it.

    But in less than 24 hours the experiment was stopped, because the "robot" posted defamatory messages that shocked many people. For half a century, software that mimics conversation with a human being has been tested, an exciting concept. In recent years these efforts have intensified, especially as there are more and more virtual assistants that understand increasingly sophisticated commands.

    Microsoft's chat-bot degenerated into deplorable language partly because its creators had not put filters in place beforehand and had not predicted that, in interacting with young people, Tay would "learn" verbal abuse from them; on Twitter there is a great deal of swearing, and racist messages are present in abundance. A "blacklist" of insulting words would have been useful and would have limited the number of insulting messages, says an artificial intelligence expert quoted by Business Insider. He added that Microsoft should have prepared the software better, because it was to be expected that young people on Twitter, exactly Tay's target audience, would test the chat-bot's limits and push it to respond to ugly things.

    The whole story suggests that any artificial intelligence program will be extensively tested in the "real world", where interactions are unpredictable. Only then can it become usable and useful to business. Microsoft issued a statement after the experiment in which it explained what the main problem was: "Unfortunately, in the first 24 hours of it being online, we found a coordinated effort by some users to abuse Tay, getting it to use its communication skills to respond in inappropriate ways."

    Tay Tweet could not distinguish harmless words from offensive ones, nor accepted ideas from taboo ones, so in just a few hours it absorbed the language of violent interactions and posted messages that horrified many people. Reporting the story, the Financial Times notes that the experiment was unsuccessful but highly useful, because it shows that such artificial intelligence software is not as smart as its creators say it is. Faced with interaction with ordinary people, the algorithms can misfire spectacularly. In addition, people in conversation with an artificial intelligence program will, out of curiosity, try to push it to the limit and will ask strange, racist, and xenophobic questions, questions about sex, and about anything else you can imagine. The programs must be "trained" to cope successfully with these questions, not to "absorb" everything like a sponge.
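
    As an illustration of the kind of "blacklist" filter the expert describes, here is a minimal sketch in Python. The word list, function names, and canned replies are hypothetical, for illustration only; this is not Microsoft's actual implementation.

    Code:

    import re

    # Hypothetical blacklist; a real deployment would use a curated list
    # of slurs and offensive terms, not these placeholder entries.
    BLACKLIST = {"offensiveword1", "offensiveword2", "hitler"}

    def is_safe(message):
        """Return False if the message contains any blacklisted word."""
        words = re.findall(r"[a-z']+", message.lower())
        return not any(word in BLACKLIST for word in words)

    def respond(message, generate_reply):
        """Refuse to learn from or answer messages that fail the filter."""
        if not is_safe(message):
            return "Sorry, I can't respond to that."
        reply = generate_reply(message)
        # Screen the bot's own output too, in case the model has already
        # absorbed offensive language from earlier interactions.
        return reply if is_safe(reply) else "Let's talk about something else."

    Filtering both the incoming message and the generated reply matters here: Tay's problem was not only abusive input, but the fact that it repeated what it had absorbed.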
