Can We Open the Black Box of AI?

Published by: Nature, International Weekly Journal of Science, 10/5/2016



Scientists are attempting to understand how computers think and learn so that they can verify the reliability of large-scale data analysis. This article covers several efforts over the last few years to understand how deep neural networks work. If scientists can understand how computers gather and interpret data in deep learning, these techniques can be used with more confidence, in day-to-day applications as well as in cutting-edge scientific research.
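To see why researchers call neural networks "black boxes," consider a minimal sketch (not from the article, and far smaller than a real deep network): a tiny two-layer network whose hand-picked weights compute XOR. Even with every parameter visible, the raw numbers do not explain the network's behavior the way an explicit if/else rule would.

```python
# Illustrative sketch: a 2-2-1 neural network with hand-picked weights
# that computes XOR. All parameters are visible, yet the numbers alone
# do not make the learned rule obvious -- the "black box" problem in
# miniature.

def step(x):
    """Threshold activation: fires (returns 1) when input is positive."""
    return 1 if x > 0 else 0

# Hidden layer: two units. Unit 0 acts like OR, unit 1 like AND,
# but nothing in the weight values announces those roles.
W_hidden = [[1, 1], [1, 1]]
b_hidden = [-0.5, -1.5]

# Output layer: fires when unit 0 is on and unit 1 is off (i.e., XOR).
W_out = [1, -1]
b_out = -0.5

def network(x1, x2):
    hidden = [
        step(W_hidden[i][0] * x1 + W_hidden[i][1] * x2 + b_hidden[i])
        for i in range(2)
    ]
    return step(W_out[0] * hidden[0] + W_out[1] * hidden[1] + b_out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a} XOR {b} = {network(a, b)}")
```

Real deep networks have millions of such parameters learned automatically from data, which is why interpreting them is an open research problem rather than a matter of reading off the weights.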

Extended Discussion Questions

  • What concerns could arise from neglecting to try to understand how computers learn?
  • How could relying on computers to interpret data impact developments in science? Can you think of any examples?
  • In what ways are humans better at solving problems than computers? Conversely, how are computers better than humans at solving problems?
  • What assumptions are researchers making when attempting to solve the black-box problem?

Relating This Story to the CSP Curriculum Framework

Global Impact Learning Objectives:

  • LO 7.2.1 Explain how computing has impacted innovations in other fields.

Global Impact Essential Knowledge:

  • EK 7.2.1A Machine learning and data mining have enabled innovation in medicine, business, and science.
  • EK 7.2.1B Scientific computing has enabled innovation in science and business.
  • EK 7.2.1G Advances in computing as an enabling technology have generated and increased the creativity in other fields.

Other CSP Big Ideas:

  • Idea 2 Abstraction
  • Idea 4 Algorithms

Banner Image: “Network Visualization – Violet – Crop 4”, derivative work by ICSI. New license: CC BY-SA 4.0. Based on “Social Network Analysis Visualization” by Martin Grandjean. Original license: CC BY-SA 3.0
