Regulating artificial intelligence: Where are we now? Where are we heading?

Annabel Ashby, Imran Syed and Tim Clement-Jones | Technology's Legal Edge | 3 March 2021


Hard law or soft law?

It is no surprise that the regulation of artificial intelligence is a hot topic. AI is being adopted at speed, news reports frequently feature high-profile AI decision-making, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem challenging.

Where are we now? What regulation can we expect in the future? And what might "ethical" AI actually mean?

Great strides were made on high-level ethical principles for AI by the OECD, the EU and the G20 in 2019, as explained below. 2020 saw the major institutions working to capture those principles in proposed new regulation and operating processes. 2021 will no doubt maintain that momentum as these initiatives continue their journey, bringing further guidance and some hard law.

Meanwhile, as regulation catches up with reality (as is often the case where technological innovation is concerned), industry has sought to provide reassurance by developing voluntary codes of conduct. While this is helpful and commendable, regulators take the fairly consistent view that risk-based regulation is better than voluntary best practice.

We briefly describe the most significant initiatives below, but first it is worth understanding what regulation might look like for organisations using AI.

Regulating AI

The devil will, of course, be in the detail, but analysis of the most influential papers worldwide identifies common themes that are likely precursors of regulation. It reveals that, conceptually, the regulation of AI is quite simple, with three key components:

  • standards to be met;
  • record-keeping obligations; and
  • possible certification following audit of those records, all of which will be framed by a risk-based approach.

Standards

Quality starts with the governance process around an organisation’s decision to use AI in the first place (does it, perhaps, involve an ethics committee? If so, what does the committee consider?) before considering the quality of the AI itself and how it is deployed and operated by an organisation.

Key areas that will drive standards in AI include the quality of the training data used to teach the algorithm (flawed data can "bake in" inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the IT. There is also usually an expectation that certain information be given to those affected by the decision-making, such as consumers or job applicants. This includes explainability of those decisions and an ability to challenge them – a process made more complex when decisions are made in the so-called "black box" of a neural network.

An argument against specific AI regulation is that some of these quality standards are already enshrined in hard law, most obviously in equality laws and, where relevant, data protection. However, the more recent emphasis on ethical standards means that some aspects of AI that have historically been considered soft nice-to-haves may well develop into harder must-haves for organisations using AI. For example, the Framework for Ethical AI adopted by the European Parliament last Autumn includes mandatory social responsibility and environmental sustainability obligations.
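To make the training-data point concrete, here is a toy illustration of how a simple disparity check can surface the bias that skewed historical data would otherwise "bake in" to a model. The data, group names and figures are invented for the example:

```python
# Toy illustration: a disparity check on historical outcome data. All
# figures and group names are invented; real bias auditing is far richer.
from collections import Counter

historical_hires = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "hired"), ("group_b", "rejected"), ("group_b", "rejected"),
]

outcomes = Counter(historical_hires)
for group in ("group_a", "group_b"):
    hired = outcomes[(group, "hired")]
    total = hired + outcomes[(group, "rejected")]
    print(f"{group}: historical hire rate {hired / total:.0%}")

# A model trained naively on this data would tend to reproduce the
# imbalance, which is why training-data quality is a likely standard.
```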

Records

To demonstrate that processes and standards have been met, record-keeping will be essential. At least some of these records will be open to third-party audit as well as being used for an organisation's own due diligence. Organisations need a certain maturity in their AI governance and operational processes to achieve this, although for many it will be a question of identifying gaps and/or enhancing existing processes rather than starting from scratch. Audit could include information about or access to training data sets; evidence that certain decisions were made at board level; staff training logs; operational records; and so on. Records will also form the foundation of the all-important accountability aspects of AI.

That said, AI brings particular challenges to record-keeping and audit. These include an argument for going beyond singular audits and static record-keeping into a more continuous mode of monitoring, given that the decisions of many AI solutions will change over time as they seek to improve accuracy. This capacity to learn is of course part of AI's appeal, but it creates potentially greater opportunity for bias or errors to be introduced and to scale quickly.
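By way of illustration only, the sketch below shows the kind of decision-level record that might be kept for later audit. The field names and structure are our own assumptions, not drawn from any regulation or proposal:

```python
# Hypothetical sketch of a per-decision audit record; field names are
# our own illustration, not taken from any regulatory proposal.
import json
from datetime import datetime, timezone

def make_audit_record(model_version, training_data_ref, decision, human_reviewer=None):
    """Capture the evidence an auditor might later ask to see."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,          # which iteration of the model decided
        "training_data_ref": training_data_ref,  # pointer to the training data used
        "decision": decision,                    # the outcome being accounted for
        "human_reviewer": human_reviewer,        # who, if anyone, oversaw the decision
    }

record = make_audit_record(
    model_version="credit-model-2021-02",
    training_data_ref="datasets/loans-2015-2020",
    decision={"applicant": "A-1042", "outcome": "declined"},
    human_reviewer="case-officer-17",
)
print(json.dumps(record, indent=2))
```

Because a learning model's behaviour drifts over time, records like this would feed the more continuous monitoring described above rather than a one-off audit.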

Certification

A satisfactory audit could inform AI certification, helping to drive quality and build the customer and public confidence in AI decision-making that is necessary for successful use of AI. Again, although the evolving nature of AI that "learns" complicates matters, certification will need to be measured against standards and monitoring capabilities that address these aspects of AI risk.

Risk-based approach

Recognising that AI's uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on "high-risk" uses of AI. This concept is key: proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact, rather than merely focusing on the technology itself.
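As a purely illustrative sketch of that context-first logic, the toy function below classifies a deployment by where and on whom it is used rather than by the underlying technology. The tiers and criteria are assumptions invented for the example:

```python
# Illustrative only: risk tiering driven by deployment context and impact,
# not by the technology itself. Contexts and tiers are invented examples.
HIGH_RISK_CONTEXTS = {"recruitment", "credit scoring", "medical diagnosis"}

def risk_tier(context: str, affects_individuals: bool) -> str:
    """Classify a deployment by its context and potential impact."""
    if context in HIGH_RISK_CONTEXTS and affects_individuals:
        return "high"  # the focus of most regulatory proposals
    return "low"       # lighter touch, or voluntary best practice

print(risk_tier("recruitment", affects_individuals=True))            # high
print(risk_tier("music recommendation", affects_individuals=False))  # low
```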

Key initiatives and proposals

Turning to some of the more significant developments in AI regulation, there are some specifics worth focussing on:

OECD

The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
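To illustrate how those four dimensions might be recorded in practice, here is a minimal sketch of the classification as a data structure. The class and field names are our own hypothetical rendering, not terminology from the OECD framework:

```python
# Minimal sketch of the OECD's four classification dimensions as a data
# structure. Class and field names are hypothetical, not OECD terminology.
from dataclasses import dataclass
from typing import List

@dataclass
class AISystemClassification:
    # Context: sector, stakeholders, purpose, etc.
    sector: str
    stakeholders: List[str]
    purpose: str
    # Data and input: what the system learns from
    data_and_input: str
    # AI model: e.g. neural or linear, supervised or unsupervised
    model_family: str
    learning_mode: str
    # Tasks and output: what the AI actually does
    task: str

# Example: a hypothetical CV-screening tool viewed through this lens
screening_tool = AISystemClassification(
    sector="recruitment",
    stakeholders=["job applicants", "employers"],
    purpose="shortlisting candidates",
    data_and_input="historical hiring records",
    model_family="neural",
    learning_mode="supervised",
    task="ranking applicants",
)
print(screening_tool)
```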

Europe

Several significant proposals were published by key institutions in 2020.

In the Spring, the European Commission's White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. The Commission believes that regulation can underpin an AI "Eco-system of Excellence" with resulting public buy-in thanks to an "Eco-system of Trust." For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.

In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to "AI, robotics and related technologies developed, deployed and/or used within the EU" (regardless of the location of the software, algorithm or data itself). Like the Commission's White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on human oversight required to achieve certification. Concurrently, the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.

Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report identified gaps in the existing legal protection that need to be plugged (a conclusion also reached by the European Parliamentary Research Service, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.

United Kingdom   

The AI Council's AI Roadmap sets out recommendations to the UK government on the strategic direction for AI. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail, read the executive summary.

Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.

The government’s response to that report has just been published. It recognises the need for public trust in AI “including embedding ethical principles against a consensus normative framework.” The report promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, who have together been developing a legal framework for data governance upon which they are about to report.

The influential Centre for Data Ethics and Innovation (CDEI) published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.

Regulators

Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to overreliance by the major players on a particular algorithm and/or data pool? (The competition authorities will be interested in this aspect too.) The UK's Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.

The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.

Regulators themselves are now becoming more of a focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, the Information Commissioner's Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.

2021 and beyond

In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question, therefore, is less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.