Author: 爱游戏体育官网 | Published: 2023-03-15 22:38


A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a “fascist’s dream” if the technology were misused by authoritarian regimes.

“Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Ms Crawford told the SXSW tech conference. The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said.

Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways. Sir Mark Walport, the British government’s chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, involving nuanced human judgment, could produce damaging results and erode public trust in the technology. Although AI had the potential to enhance human judgment, it also risked baking in harmful prejudices and giving them a spurious sense of objectivity. “Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment — and externalise these through their algorithms,” he wrote in an article in Wired.
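Sir Mark Walport's point about machine learning internalising historical bias can be made concrete with a toy example. The sketch below (all records, groups and numbers are invented for illustration, not drawn from any real dataset) trains a trivial "sentencing model" that simply learns the average historical sentence per group, then replays that average as a recommendation:

```python
from collections import defaultdict

# Hypothetical historical records: (group, sentence_in_months).
# Group "B" was historically sentenced more harshly for the same offence.
history = [("A", 10), ("A", 12), ("A", 11),
           ("B", 18), ("B", 20), ("B", 19)]

def train(records):
    """Learn the mean historical sentence for each group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for group, months in records:
        totals[group] += months
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}

def predict(model, group):
    """Recommend a sentence: the model just replays the historical average."""
    return model[group]

model = train(history)
# The disparity in the training data re-emerges as an "objective" recommendation:
print(predict(model, "A"))  # 11.0
print(predict(model, "B"))  # 19.0
```

Nothing in the model is explicitly prejudiced; the bias lives entirely in the training data, which is exactly why it acquires the spurious sense of objectivity Walport describes.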

As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.

For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts who have not been captured by the industry or are not otherwise conflicted.

Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI. Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI. Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI. Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in the Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company’s activities.

But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. “We have to be able to control these systems so they do what we want when we want and they don’t run ahead of us,” he says in an interview for the FT Tech Tonic podcast.

Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are “explainable” to the public. That sounds simple in principle, but may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.
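Hildebrandt's standard, that a system's decisions must be meaningfully explainable and contestable, is easiest to picture with a model whose score decomposes feature by feature. A minimal sketch, with invented weights, features and threshold:

```python
# Illustrative only: a hand-specified linear scoring rule. Because the final
# score is a sum of per-feature contributions, each contribution can be
# inspected in isolation -- and therefore contested.
weights = {"prior_offences": 2.0, "age": -0.25, "employed": -1.5}
threshold = 3.0

def explain(applicant):
    """Return each feature's contribution to the final score."""
    return {f: weights[f] * applicant[f] for f in weights}

def decide(applicant):
    """Flag the applicant if the summed contributions exceed the threshold."""
    score = sum(explain(applicant).values())
    return ("flagged" if score > threshold else "cleared"), score

applicant = {"prior_offences": 2, "age": 28, "employed": 1}
print(explain(applicant))  # {'prior_offences': 4.0, 'age': -7.0, 'employed': -1.5}
print(decide(applicant))   # ('cleared', -4.5)
```

A deep neural network offers no such decomposition out of the box, which is why "explainable in principle" and "contestable in a courtroom" can be very different engineering problems.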

We are going to need a lot more human intelligence to address the challenges of AI.