General Meeting 2: Mental Illness & Incarceration

March 21, 2018: Mental Illness & Incarceration

Summary:

  • People in a mental health crisis are more likely to encounter police than to get medical help; 15% of men and 30% of women in jails have a serious mental health condition.
  • Mentally ill inmates tend to remain in jail longer than other inmates.
  • Mentally ill inmates often present behavioral-management challenges that result in isolation, and they are more likely to die by suicide.

Discussion:

  • What do we know about how mental illness affects criminal involvement, incarceration, and recidivism after incarceration? How does it affect outcomes post-release?
    • About 5% of the world’s population has a mental illness, compared with roughly 25% of the incarcerated population.
  • What should be done to remedy poor outcomes? Are proposed solutions realistic? What could make them realistic?
    • How can we keep state-run hospitals or mental health institutions from becoming prison-like?
      • Public perceptions of “state hospitals” or “psych wards” are poor. Major issues are often financial: privatization and budgets.
    • Mental health care can’t just be a consideration for only the “most severe” patients. Anyone with mental illness or at risk of mental illness should be able to access treatment.
    • It is evident that full recovery is possible, but it is made difficult under the current American system by systematic defunding of resources and a lack of infrastructure.
  • What right do incarcerated individuals have to refuse treatment for mental illness?
    • Competence is reviewed by a panel of psychologists.
    • People, having the right to autonomy, should have the right to refuse treatment, but that treatment should remain available to them.
      • Part of incarceration is a loss of some liberties. Is refusing treatment actually a fundamental right, or is it a privilege?
  • Does the judiciary system perpetuate the stigma around mental illness? How?
    • One aspect of the stigma is that mental illness is inherently dangerous.
    • The media and the judicial system may operate in a feedback loop, where the media feeds popular rhetoric and the judicial system falls prey to it (e.g., gun violence rhetoric).
    • Stigma might die down naturally as education becomes more common.
    • People will still make associations based on how criminals are represented in the media. If there is money to be made, people will make it. Demonizing people makes for a compelling story: audiences look for an “othering” factor, a source and a symptom. Portrayals often exaggerate traits associated with illness, especially the “unnamed” mental illnesses in media.
    • Success stories don’t make headlines.
  • Does the government have an interest in regulating how people are treated?
    • Ties to the Universal Healthcare debate.
    • Preventing recidivism is in the individual’s and the government’s best interest. If treatment can accomplish that, then yes.
    • Is medicating people actually fixing the problem?
  • In advocating for treatment over incarceration, how do we prevent the same mistreatment of patients and their families in mental hospitals that we have seen before?
    • Well-trained, adequately sized staff, and a push for personalized healthcare.
    • Biggest problem will always be money.

Resources:

General Meeting 1: Artificial Intelligence

February 14, 2018: Robots Are A Little Bit Racist

Summary:

  • Machine learning algorithms pick up racist and sexist associations embedded in the language data human engineers feed them.
  • If patterns from data are reflective of certain biases and those patterns are used to make decisions affecting people, then you end up with “unacceptable discrimination.”
  • Algorithms are expected to update their models as new and better information becomes available.
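The mechanism described above can be illustrated with a minimal sketch. The synthetic "hiring history" and the toy frequency-based model are assumptions for illustration only: a system that simply learns historical approval rates reproduces whatever bias is baked into that history, rather than correcting it.

```python
import random

random.seed(0)

# Hypothetical synthetic history: past decisions were biased, approving
# group "A" applicants far more often than equally qualified "B" applicants.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        # Qualification matters, but group B was approved half as
        # often regardless of merit -- this is the embedded bias.
        approve_rate = 0.9 if qualified else 0.1
        if group == "B":
            approve_rate *= 0.5
        approved = random.random() < approve_rate
        data.append((group, qualified, approved))
    return data

# A naive "model" that just memorizes approval frequency per
# (group, qualified) cell faithfully reproduces the historical bias.
def train(data):
    counts = {}
    for group, qualified, approved in data:
        key = (group, qualified)
        hits, total = counts.get(key, (0, 0))
        counts[key] = (hits + approved, total + 1)
    return {k: hits / total for k, (hits, total) in counts.items()}

model = train(make_history())
print(model[("A", True)])  # high approval rate for qualified A applicants
print(model[("B", True)])  # markedly lower for equally qualified B applicants
```

Nothing in the training step is explicitly discriminatory; the "unacceptable discrimination" falls out of treating a biased record as ground truth.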

Discussion:

  • Is “intelligence” related to generativity (the ability to construct ideas) or to morality?
  • What are the potential damages of building machines that make predictions about people? What are the potential benefits?
    • AI being used to predict “dangerousness” of criminals, a la Psycho-Pass.
    • Is all bias inherently bad, or can some bias be good?
  • Should we be trying to build robots with the capacity for moral judgment?
    • If we do build machines with the capacity for moral judgment, what do we do when they get better at it than us?
    • If humans operate against our moral code daily, wouldn’t a truly intelligent machine do that as well?
  • How is the data these machines are using being mined? How can we improve upon this process to make it less biased?
  • Bias in AI can’t be avoided until its “root” in society is deconstructed. This being the case, is it still the responsibility of AI builders to remediate the issue, or at least compensate for it?
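One concrete answer to the question of making the data pipeline less biased is preprocessing by reweighing, in the spirit of Kamiran and Calders: give each record a weight so that the protected attribute and the outcome become statistically independent in the weighted data. The dataset and function names below are illustrative assumptions, not anything discussed at the meeting.

```python
import random
from collections import Counter

random.seed(1)

def reweigh(data):
    """data: list of (group, outcome) pairs.

    Assign weight(g, o) = P(g) * P(o) / P(g, o), which makes group and
    outcome independent under the weighted distribution.
    """
    n = len(data)
    joint = Counter(data)
    groups = Counter(g for g, _ in data)
    outcomes = Counter(o for _, o in data)
    return {
        (g, o): (groups[g] / n) * (outcomes[o] / n) / (joint[(g, o)] / n)
        for (g, o) in joint
    }

# Biased history: group "B" is approved half as often as group "A".
history = [("A", random.random() < 0.6) for _ in range(500)] + \
          [("B", random.random() < 0.3) for _ in range(500)]

weights = reweigh(history)

def weighted_rate(group):
    # Approval rate for one group under the learned weights.
    rows = [(g, o) for g, o in history if g == group]
    total = sum(weights[(g, o)] for g, o in rows)
    approved = sum(weights[(g, o)] for g, o in rows if o)
    return approved / total

print(round(weighted_rate("A"), 3))
print(round(weighted_rate("B"), 3))  # equal to group A's weighted rate
```

This does not touch the social “root” of the bias, which is the point of the question above: the data can be rebalanced, but only relative to a fairness criterion someone chose.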

Resources: