General Meeting 1: Artificial Intelligence

February 14, 2018: Robots Are A Little Bit Racist

Summary:

  • Machine learning algorithms pick up on racist and sexist associations embedded in the human-written language data they are trained on (see the sketch after this list).
  • If the patterns learned from data reflect societal biases, and those patterns are used to make decisions that affect people, the result is “unacceptable discrimination.”
  • Algorithms are expected to update their models as new and better information becomes available.
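
Models absorb these associations because they learn word meanings from co-occurrence statistics in human text. Below is a minimal sketch of how such a bias can be measured. The vectors are made-up toy values, not real embeddings; real models such as word2vec show the same pattern at scale, as documented in Bolukbasi et al.'s “Man is to Computer Programmer as Woman is to Homemaker” (2016).

    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Toy 3-dimensional vectors standing in for learned embeddings
    # (illustrative values only; real embeddings have hundreds of
    # dimensions and are trained on large text corpora).
    vectors = {
        "he":       np.array([ 0.9, 0.1, 0.2]),
        "she":      np.array([-0.9, 0.1, 0.2]),
        "engineer": np.array([ 0.7, 0.5, 0.1]),
        "nurse":    np.array([-0.7, 0.5, 0.1]),
    }

    # A "gender direction": the difference between gendered word vectors.
    gender_axis = vectors["he"] - vectors["she"]

    # Projecting occupation words onto that axis exposes the association
    # the model absorbed from its training text.
    for word in ("engineer", "nurse"):
        score = cosine(vectors[word], gender_axis)
        print(f"{word:>9}: {score:+.2f}  (positive = closer to 'he')")

Projecting words onto a direction like this is one standard way researchers quantify the associations a model has learned; here "engineer" scores positive (male-associated) and "nurse" scores negative, mirroring the stereotype baked into the toy data.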

Discussion:

  • Is “intelligence” related to generativity (the ability to construct ideas) or to morality?
  • What are the potential damages of building machines that make predictions about people? What are the potential benefits?
    • AI being used to predict the “dangerousness” of criminals, à la Psycho-Pass.
    • Is all bias inherently bad, or can some bias be good?
  • Should we be trying to build robots with the capacity for moral judgment?
    • If we do build machines with the capacity for moral judgment, what do we do when they get better at it than us?
    • If humans act against our own moral codes daily, wouldn’t a truly intelligent machine do the same?
  • How is the data these machines rely on being collected and mined? How can we improve that process to make it less biased? (One common mitigation is sketched after this list.)
  • Bias in AI can’t be fully avoided until its “root” in society is deconstructed. That being the case, is it still the responsibility of AI builders to remediate, or at least compensate for, the issue?
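
On making data collection less biased: one simple, commonly discussed mitigation is to rebalance a skewed dataset so each demographic group is equally represented before training. A minimal sketch with made-up examples and group labels follows; rebalancing alone does not remove all bias, but it illustrates the kind of intervention builders can make.

    from collections import Counter
    import random

    # Hypothetical labeled examples, each tagged with a demographic group.
    # A skewed collection process over-represents group "A".
    data  = [(f"ex{i}", "A") for i in range(80)]
    data += [(f"ex{i}", "B") for i in range(20)]

    def rebalance(examples, seed=0):
        """Downsample over-represented groups so every group
        contributes equally to the training set."""
        random.seed(seed)
        by_group = {}
        for item, group in examples:
            by_group.setdefault(group, []).append((item, group))
        n = min(len(items) for items in by_group.values())
        balanced = []
        for items in by_group.values():
            balanced.extend(random.sample(items, n))
        return balanced

    balanced = rebalance(data)
    print("before:", Counter(g for _, g in data))      # A: 80, B: 20
    print("after: ", Counter(g for _, g in balanced))  # A: 20, B: 20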

Resources: