ML NEWS - 181016

16/10/2018

Photo by Johannes Plenio on Unsplash

Apple hopes you'll figure out what to do with AI on the iPhone XS
One of the toughest problems in machine learning, within the broader field of AI, is to figure out what problem the computer should be solving. Computers can only learn and understand, if they understand at all, when something is framed as a matter of finding a solution to a problem.
Apple is approaching that challenge by hoping to lure developers to use its chips and software programming tools to supply the new use cases for neural networks on a mobile device.
https://www.zdnet.com/article/apple-hopes-youll-figure-out-what-to-do-with-ai/

Unbiased algorithms can still be problematic
Algorithms are sets of rules that computers follow in order to solve problems and make decisions about a particular course of action. Whether it’s the type of information we receive, the information people see about us, the jobs we get hired to do, the credit cards we get approved for, and, down the road, the driverless cars that either see us or don’t, algorithms are increasingly becoming a big part of our lives. But there is an inherent problem with algorithms that begins at the most basic level and persists throughout their adoption: human bias that is baked into these machine-based decision-makers.
https://techcrunch.com/2018/09/30/unbiased-algorithms-can-still-be-problematic/?guccounter=1

Artificial Intelligence Can Reinforce Bias, Cloud Giants Announce Tools For AI Fairness
Unfairly trained Artificial Intelligence (AI) systems can reinforce bias, so AI systems must be trained fairly. Experts say AI fairness is a dataset issue for each specific machine learning model. AI fairness is a newly recognized challenge, and the big cloud providers are in the process of developing and announcing tools to help address it.
The core challenge for AI is that deep learning models are “black boxes”. It is very difficult—and often simply not possible—for mere humans to understand how individual training data points influence each output classification (inference) decision. The term “opaque” is also used to describe this hidden classification behavior. It’s hard to trust a system when you can’t understand how it makes decisions.
https://www.forbes.com/sites/paulteich/2018/09/24/artificial-intelligence-can-reinforce-bias-cloud-giants-announce-tools-for-ai-fairness/
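Fairness tools like the ones the cloud providers are announcing typically start from simple group-level metrics. As a minimal sketch (with entirely made-up approval decisions, not data from any real system), the "demographic parity" check compares positive-decision rates across groups defined by a protected attribute:

```python
# Hypothetical illustration of a demographic parity check.
# All decision data below is invented for the example.

def selection_rate(decisions):
    """Fraction of positive (e.g. approve = 1) decisions in a group."""
    return sum(decisions) / len(decisions)

# Model decisions (1 = approved), split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved -> 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 approved -> 0.375

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"demographic parity gap: {gap:.3f}")  # prints 0.375
```

A large gap doesn't by itself prove the model is unfair, but it flags where to look in the training data, which is consistent with the point above that fairness is largely a dataset issue for each specific model.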

