Artificial Intelligence Predictive Policing: Efficient or Unfair?

In 2014, 18-year-old Brisha Borden and a friend made an impulse decision to take an unattended scooter and bike, which they returned as soon as the owner showed up. Nevertheless, the girls were charged with burglary and petty theft for the items, worth a combined $80. The previous year, 41-year-old Vernon Prater, who had prior armed robbery charges and had served a five-year prison sentence, was arrested for shoplifting $86.35 worth of goods from a Home Depot, an offense comparable to Borden’s.

However, when booked into jail, Borden, who is black, was labeled “high risk” for future offenses, while Prater, who is white, was labeled “low risk”, despite criminal histories that indicated the opposite. So why were police departments making inaccurate, biased predictions? Were officers making these calls?

Not quite.

Borden and Prater’s risk assessments were produced by an artificial intelligence algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which is meant to predict future crime by convicted individuals; its results are given to a judge to aid in sentencing decisions. Risk assessments are part of a broader family of AI technology used in law enforcement, commonly known as predictive policing.

Predictive policing has been discussed for decades, but it has only been widely implemented in law enforcement relatively recently. The process involves artificial intelligence algorithms analyzing large data sets of criminal activity, including arrest and conviction rates as well as the demographics of certain areas, in order to estimate the risk that an individual will offend again and to decide how heavily to police each area based on its rate of crime.
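
To make that process concrete, here is a deliberately simplified sketch in Python of what an individual risk assessment computes. COMPAS’s actual model is proprietary and far more complex; the features, weights, and threshold below are invented purely for illustration.

```python
import math

def risk_score(prior_arrests: int, age: int, neighborhood_arrest_rate: float) -> float:
    """Toy 0-to-1 'risk of reoffending' score from a hand-picked linear model."""
    z = 0.4 * prior_arrests - 0.05 * (age - 18) + 2.0 * neighborhood_arrest_rate - 1.0
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability-like score

def risk_label(score: float) -> str:
    return "high risk" if score >= 0.5 else "low risk"

# Two hypothetical defendants: the neighborhood term, a stand-in for demographic
# and geographic data, can outweigh a thin individual record.
print(risk_label(risk_score(prior_arrests=1, age=18, neighborhood_arrest_rate=0.6)))  # high risk
print(risk_label(risk_score(prior_arrests=4, age=41, neighborhood_arrest_rate=0.1)))  # low risk
```

Even in this toy model, where someone lives can matter more than what they have actually done, which is the pattern Borden’s and Prater’s scores followed.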

In theory, predictive policing seems like a win-win: avoiding the bias of human error as well as promoting efficiency within Police Departments. But the truth is far from this idealistic take on a technology with the potential to have an extremely large impact on many lives and communities in the US.

Because the AI algorithms base their predictions on historical crime data sets, including data from periods when police departments engaged in unlawful, racially and socioeconomically biased practices, areas that previously had high rates of crime are automatically assigned the label “high-risk neighborhood”. This leaves such areas unable to reform their image: being labeled “crime-ridden” by the algorithm causes an area to be overpoliced, which in turn raises its arrest and conviction rates and ultimately perpetuates systemic bias through an unending policing-arrest-risk cycle. Furthermore, individuals with little significant prior criminal record, like Borden, are labeled “high risk of recidivism” solely because of their racial and socioeconomic background.
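
That cycle is easy to reproduce in a few lines of code. The simulation below is a hypothetical sketch, not any vendor’s actual system: two areas have identical underlying crime, one starts with a biased arrest history, and patrols keep being sent wherever the recorded numbers are highest.

```python
# Every number here is invented; the point is the dynamic, not the magnitudes.
true_crime_rate = {"A": 0.10, "B": 0.10}     # identical underlying crime in both areas
recorded_arrests = {"A": 500.0, "B": 100.0}  # area A starts with a biased arrest history

for year in range(1, 6):
    # The "forecast": whichever area has more recorded arrests is labeled high risk.
    flagged = max(recorded_arrests, key=recorded_arrests.get)
    # Most patrols go to the flagged area, so most new arrests happen there too.
    patrols = {area: 800 if area == flagged else 200 for area in recorded_arrests}
    for area, n_patrols in patrols.items():
        recorded_arrests[area] += n_patrols * true_crime_rate[area]
    snapshot = {area: int(count) for area, count in recorded_arrests.items()}
    print(f"year {year}: flagged as high risk: {flagged}, recorded arrests: {snapshot}")
```

Area A is flagged “high risk” every single year even though the true crime rates never differ, which is the policing-arrest-risk cycle in miniature.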

Take PredPol, for example. A company based in Santa Cruz, PredPol uses predictive analytics to help law enforcement predict future criminal activity, essentially dividing cities into sections and assigning each a sort of “crime forecast” used to determine how much police force to concentrate in a given area. According to US Department of Justice figures, black people are five times as likely to be arrested as white people. This means the data the PredPol algorithms draw upon is biased regardless of the software’s supposed impartiality, defeating a major purpose of using such technology in the first place: avoiding human bias.
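
A small numerical example shows why that matters for a grid-style forecast. The arrest probabilities below are hypothetical, chosen only to mirror the roughly five-to-one disparity in the DOJ figure; this is not PredPol’s actual model.

```python
# Hypothetical illustration: a "forecast" built from arrest counts inherits the
# disparity in who gets arrested, even when true offending is identical.
true_offenses_per_cell = 100                  # same underlying offending in every cell
arrest_prob = {"heavily_policed_cell": 0.25,  # invented per-offense arrest probabilities
               "lightly_policed_cell": 0.05}  # with a 5:1 ratio echoing the DOJ figure

# Recorded arrests are the only signal such a forecast ever sees.
forecast = {cell: true_offenses_per_cell * p for cell, p in arrest_prob.items()}
print(forecast)  # {'heavily_policed_cell': 25.0, 'lightly_policed_cell': 5.0}
print("patrols go to:", max(forecast, key=forecast.get))
```

The forecast faithfully reflects its input data; the problem is that the input measures policing at least as much as it measures crime.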

The accuracy problems of predictive policing are not limited to systemic bias. For instance, if an area saw an unusually large amount of crime on a single day, such as a mass murder, the area would be labeled “high risk”, prompting law enforcement to increase the police presence there. With more officers monitoring citizens, more people are bound to be arrested, cementing the area’s “high risk” image and showing how a single incident can have a permanent impact on a location if law enforcement continues to rely heavily on predictive policing.

The AI cannot determine on its own what is biased or inaccurate; it can only interpret and spit back whatever data is fed to it. With no historically unbiased data to draw on, an unbiased predictive policing scheme seems unlikely in the future.

So is the technology salvageable? Debatable. Deciding the future of people’s lives and of whole communities based on the judgment of biased algorithms does not seem fair, yet many police departments still use the software. Perhaps in the coming decades the data the AI draws on will be less skewed than it is today, but expecting the technology to account for all of society’s nuances is not realistic. Only time will tell.

Sources

“Predictive policing algorithms are racist. They need to be dismantled.”, MIT Technology Review, Will Douglas Heaven, 17 Jul. 2020

“Why Hundreds of Mathematicians are Boycotting Predictive Policing”, Popular Mechanics, Courtney Linder, 20 Jul. 2020

“Machine Bias”, ProPublica, Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, 23 May 2016

“Predictive policing is a scam that perpetuates systemic bias”, The Next Web, Tristan Greene, 21 Feb. 2019

Translated from: https://medium.com/the-black-box/artificial-intelligence-predictive-policing-efficient-or-unfair-fe731962306d
