Al-Efesbi anti-aircraft systems
Mar. 30th, 2015 02:09 pm

Computers are learning to recognize objects with near-human ability. But Cornell researchers have found that computers, like humans, can be fooled by optical illusions, which raises security concerns and opens new avenues for research in computer vision.
Cornell graduate student Jason Yosinski and colleagues at the University of Wyoming Evolving Artificial Intelligence Laboratory have created images that look to humans like white noise or random geometric patterns but which computers identify with great confidence as common objects.
http://news.cornell.edu/stories/2015/03/images-fool-computer-vision-raise-security-concerns
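The paper behind this (Nguyen, Yosinski & Clune, "Deep Neural Networks are Easily Fooled") produced such images both with evolutionary search and with gradient ascent on the network's class scores. Below is a minimal sketch of the gradient-ascent route only, assuming a stock pretrained ImageNet classifier (ResNet-18) and an arbitrary target class; the model choice, optimizer and step counts are my own illustrative picks, not the authors' setup.

```python
# Sketch: turn random noise into an image a pretrained classifier labels
# with high confidence. Not the authors' code; model and hyperparameters
# are assumptions chosen for illustration.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

target_class = 954   # arbitrary ImageNet class index, chosen for illustration
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Push the pixels toward higher probability for the target class
    # (minimize its negative log-probability).
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)   # keep pixel values in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"classifier confidence in target class: {confidence:.3f}")
```

To a person the result still looks like noise or an abstract pattern, while the network reports high confidence in the chosen class, which is exactly the mismatch the article describes.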

(no subject)
Date: 2015-03-31 07:51 am (UTC)"For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network."
http://www.i-programmer.info/news/105-artificial-intelligence/7352-the-flaw-lurking-in-every-deep-neural-net.html
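The quote is from Szegedy et al.'s "Intriguing properties of neural networks", which found such adversarial examples with a box-constrained L-BFGS search. A later and simpler recipe, the fast gradient sign method (FGSM), shows the same effect in a few lines: nudge every pixel a tiny step in the direction that increases the classifier's loss. The sketch below is that simpler method, not the procedure from the quoted paper, and it assumes a stock pretrained model and an illustrative epsilon.

```python
# Sketch of FGSM: an imperceptible perturbation that flips the prediction.
# Model, epsilon and the stand-in "image" tensor are assumptions; in practice
# a real, properly normalized photograph would be loaded here.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)      # stand-in for a real input image
image.requires_grad_(True)

logits = model(image)
label = logits.argmax(dim=1)            # whatever class the network picks now
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

epsilon = 2.0 / 255                     # per-pixel change too small to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The two images differ by at most a couple of intensity levels per pixel, so they look identical to a human, yet the network's answer can change, which is the "flaw lurking in every deep neural net" the linked article is about.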