Technology's Legal Edge – a global technology blog
//www.zs-weijie.com

The future regulation of technology: a speed read
//www.zs-weijie.com/2021/04/the-future-regulation-of-technology-speed-read/
Imran Syed | Friday, 30 April 2021 | Artificial Intelligence

Last week, the European Commission published its proposal for an Artificial Intelligence Regulation which could have far-reaching implications for your technology-enabled business and life. This is not a niche corner of regulation: it has the potential to apply broadly, to everything from smart products to social media platforms and connected retailers.

This note distils the proposal down to a few key points, as a quick way into the main areas the proposed new regulation covers, which will be hotly debated in the coming months.  We will be looking at this further from a number of perspectives over the coming weeks.

This is your speed read:

  • Concept: The proposed Regulation forms the foundation of the EU’s “ecosystem of trust” in AI.  A few uses of AI will be prohibited altogether.  Otherwise, the idea is to impose common standards on those AI systems which affect EU citizens in the most significant ways.  From a lawyer’s point of view, the EU proposes to achieve these aims by risk-based regulation similar in approach to the existing data protection and product safety regimes.
  • Extraterritorial scope: The Regulation will apply to high-risk AI which is made available within the EU, used within the EU, or whose output affects people in the EU.  Because the aim is to reassure and protect the key rights of EU citizens, it is irrelevant whether the provider or user is within the EU.  So, for example, where the AI is hosted on a server outside the EU and/or the decisions which the AI makes or enhances relate to an activity carried out outside the EU, the regime can still apply.
  • Sanctions: Fines can be very substantial, reaching up to €30 million or 6% of global turnover (whichever is higher), and could be in addition to fines for overlapping breaches, such as the 4% of global turnover cap under the GDPR (see the illustrative calculation after this list).
  • AI: Artificial Intelligence is defined very broadly and will capture a number of technologies already in wide use. The definition is also future-proofed, because the proposal grants the Commission powers to update the techniques and approaches which fall within scope.
  • Prohibited practices: AI practices which have significant potential to contravene fundamental rights or which seek to manipulate or take advantage of certain categories of person will be prohibited – including general surveillance, adverse behavioural advertising and social scoring.
  • High-risk AI systems: AI may be classified as high-risk because of safety implications or because it can impact fundamental rights; Annex III of the Regulation includes a new list of AI systems deemed ‘high-risk’.  In practice, many common uses of technology will fall into this category and become subject to the full compliance regime.  For the remaining, non-high-risk AI, a few basic controls will apply, but the Commission will also encourage voluntary codes of conduct and additional commitments by providers.
  • Providers: The most onerous controls will apply to Providers – a person or organisation that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.  Many of the controls which will apply to Providers will be familiar to those who have been tracking AI developments and proposals more generally: transparency; accuracy, robustness and security; accountability; testing; data governance and management practices; and human oversight.  EU-specific aspects include the requirement for Providers to self-certify high-risk AI by carrying out conformity assessments and affixing the CE marking, the requirement that high-risk AI be registered on a new EU database before first use/provision, and the requirement for Providers to have in place an incident reporting system and to take corrective action for serious breaches/non-compliance.
  • Importers, Distributors and Users: Other participants in the high-risk AI value chain will also be subject to new controls.  Importers, for example, will need to ensure that the Provider carried out the conformity assessment and drew up the required technical documentation.  Users will be required to use the AI in accordance with its instructions, monitor it for problems (and flag any to the provider/distributor), keep logs, and so on.  Personal use is excluded from the User obligations.
  • Next steps: The proposal will be subject to intense scrutiny and lobbying.  It could still be amended before adoption by the European Council and European Parliament.  Timescales are difficult to predict but once finalised and formally published there will be a lead time of at least two years before it takes effect.  This will allow providers and others in the value chain time to ensure compliance in relation to existing AI.
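To make the sanctions point concrete, here is a minimal sketch of how the two “whichever is higher” caps compare; the function names and the turnover figure are our own, purely for illustration:

```python
# Illustrative only: the fine caps under the proposed AI Regulation
# (EUR 30m / 6%) and, separately, the GDPR (EUR 20m / 4%).
def ai_act_fine_cap(global_turnover_eur: float) -> float:
    """EUR 30 million or 6% of global annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * global_turnover_eur)

def gdpr_fine_cap(global_turnover_eur: float) -> float:
    """EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * global_turnover_eur)

turnover = 2_000_000_000  # example: EUR 2bn global turnover
print(f"AI Regulation cap: €{ai_act_fine_cap(turnover):,.0f}")  # €120,000,000
print(f"GDPR cap:          €{gdpr_fine_cap(turnover):,.0f}")    # €80,000,000
```

The point to note is that the two caps are calculated independently, so a single incident engaging both regimes could in principle attract exposure under each.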

More will follow from a UK perspective shortly.

Regulating Artificial Intelligence: where are we now? Where are we heading?
//www.zs-weijie.com/2021/03/regulating-artificial-intelligence-where-are-we-now-where-are-we-heading/
Annabel Ashby, Imran Syed and Tim Clement-Jones | Wednesday, 3 March 2021 | Artificial Intelligence


Hard law or soft law?

It is no surprise that AI regulation is a hot topic. AI is being adopted at pace, news stories about high-profile AI decision-making appear frequently, and the sheer volume of guidance and regulatory proposals for interested parties to digest can seem daunting.

Where are we now? What can we expect in terms of future regulation? And what would compliance with “ethical” AI involve?

In 2019, the OECD, the EU and the G20 each produced high-level ethical AI principles. As outlined below, 2020 saw considerable progress as key institutions worked to embed those principles into proposed new regulation and operational processes. 2021 will doubtless maintain that momentum as these initiatives move towards further guidance and some hard law.

Meanwhile, as regulation catches up with reality (as it often must with technological innovation), industry has sought to provide reassurance by developing voluntary codes. While helpful and commendable, regulators take the view that consistent, risk-based regulation is preferable to voluntary best practice.

We outline the most significant initiatives below, but first it is worth considering what regulation might look like for an organisation using AI.

Regulating AI

The devil will, of course, be in the detail, but analysis of the most influential papers globally reveals common themes which are likely precursors of regulation. It suggests that, conceptually, the regulation of AI is reasonably straightforward, with three key components:

  • prescribed standards to be met;
  • records demonstrating that those standards have been met; and
  • certification.

    Standards

    Quality starts with the governance process by which an organisation decides to use AI in the first place (might that involve an ethics committee? If so, what should the committee consider?), before moving on to the quality of the AI itself and to how the organisation deploys and operates it.

    Key areas driving AI standards include the quality of the training data used to teach the algorithm (flawed data can bake in inequality or discrimination), the degree of human oversight, and the accuracy, security and technical robustness of the AI. There is also a general expectation that certain information will be made available to those affected by AI decision-making, such as consumers or job applicants. This includes the explainability of those decisions and the ability to challenge them – a process made more complex when decisions are made within the so-called “black box” of a neural network. One argument against AI-specific regulation is that some of these quality standards are already enshrined in hard law, most obviously equality law and related data protection law. However, the recent emphasis on ethical standards means that, for organisations using AI, aspects of AI that have historically been considered soft nice-to-haves are likely to develop into hard must-haves. For example, the Ethical AI Framework adopted by the European Parliament last autumn includes mandatory social responsibility and environmental sustainability obligations.

    Records

    To demonstrate that processes and standards have been met, record-keeping is essential. At least some of those records will be open to third-party audit and will be used for an organisation’s own due diligence. Organisations will need a degree of maturity in their AI governance and operational processes to achieve this, although for many this will be a matter of identifying gaps and/or improving existing processes rather than starting from scratch. Audits could cover information about, or access to, training datasets; evidence that certain decisions were taken at board level; staff training logs; operational records, and so on. Records will also form the basis of the important accountability aspect of AI. That said, AI poses particular challenges for record-keeping and audit. Given that the decision-making of many AI solutions will change over time as they seek to improve their accuracy, there is an argument for moving beyond single audits and static record-keeping into a mode of more continuous monitoring. That evolution is, of course, part of the attraction of AI, but it potentially creates greater opportunity for bias or error to be introduced and to scale quickly.
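    As a rough illustration of the kind of record that might support audit and accountability, here is a minimal sketch; the field names and example values are our own assumptions, not a schema prescribed by any of the proposals discussed here:

```python
# Hypothetical sketch of a single AI decision record kept for audit purposes.
# Field names are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    system_id: str                 # which AI system produced the decision
    model_version: str             # models evolve, so version every record
    training_data_ref: str         # pointer to the training dataset snapshot used
    input_summary: dict            # inputs relied on (minimised for data protection)
    output: str                    # the decision or score produced
    human_reviewer: Optional[str]  # who exercised oversight, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry for a hypothetical recruitment screening tool:
record = AIDecisionRecord(
    system_id="cv-screening",
    model_version="2.3.1",
    training_data_ref="datasets/cv-screening/2021-02-snapshot",
    input_summary={"role": "analyst", "experience_band": "3-5y"},
    output="shortlisted",
    human_reviewer="recruiter-042",
)
```

    Because the model version and training-data reference are captured on every record, a later audit can tie a disputed decision back to the exact model in use at the time, which speaks to the continuous-monitoring point made above.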

    Certification

    A satisfactory audit could inform AI certification, helping to drive up quality and to build the customer and public confidence in AI decision-making that successful use of AI requires. Again, although the evolving “learning” nature of AI complicates matters, certification will need to be measured against standards and monitoring capabilities which speak to these aspects of AI risk.

    Risk-based approach

    Recognising that AI’s uses range from the relatively insignificant to critical and/or socially sensitive decision-making, best practice and regulatory proposals invariably take a flexible approach and focus requirements on “high-risk” uses of AI. This concept is key; proportionate, workable regulation must take into account the context in which the AI is to be deployed and its potential impact, rather than merely focusing on the technology itself.

    Key initiatives and Proposals

    Turning to some of the more significant developments in AI regulation, there are some specifics worth focusing on:

    OECD

    The OECD outlined its classification of AI systems in November with a view to giving policy-makers a simple lens through which to view the deployment of any particular AI system. Its classification uses four dimensions: context (i.e. sector, stakeholder, purpose etc); data and input; AI model (i.e. neural or linear? Supervised or unsupervised?); and tasks and output (i.e. what does the AI do?). Read more here.
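    As a rough sketch of how those four dimensions might be captured for a given system (the class and field contents below are our own illustrative example, not the OECD’s wording):

```python
# Hypothetical record of the OECD's four classification dimensions.
# Values are illustrative assumptions for an imaginary recruitment tool.
from dataclasses import dataclass

@dataclass
class OECDClassification:
    context: dict          # sector, stakeholders, purpose of deployment
    data_and_input: dict   # what the system learns from and consumes
    ai_model: dict         # e.g. neural vs linear, supervised vs unsupervised
    task_and_output: dict  # what the AI actually does with its results

screener = OECDClassification(
    context={"sector": "recruitment", "purpose": "candidate shortlisting"},
    data_and_input={"source": "CVs and application forms", "personal_data": True},
    ai_model={"family": "neural", "training": "supervised"},
    task_and_output={"task": "ranking", "output": "shortlist score"},
)
```

    The attraction of the framework is that a policy-maker can ask risk questions dimension by dimension (Is personal data involved? Who are the affected stakeholders?) rather than treating “AI” as a single undifferentiated category.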

    Europe

    Several significant proposals were published by key institutions in 2020.

    In the Spring, the European Commission’s White Paper on AI proposed regulation of AI by a principles-based legal framework targeting high-risk AI systems. It believes that regulation can underpin an AI “Eco-system of Excellence” with resulting public buy-in thanks to an “Eco-system of Trust.” For more detail see our 2020 client alert. Industry response to this proposal was somewhat lukewarm, but the Commission seems keen to progress with regulation nevertheless.

    In the Autumn the European Parliament adopted its Framework for Ethical AI, to be applicable to “AI, robotics and related technologies developed, deployed and/or used within the EU” (regardless of the location of the software, algorithm or data itself). Like the Commission’s White Paper, this proposal also targets high-risk AI (although what high-risk means in practice is not aligned between the two proposals). As well as the social and environmental aspects we touched upon earlier, notable in this proposed Ethical Framework is the emphasis on the human oversight required to achieve certification. Concurrently, the European Parliament looked at IP ownership for AI-generated creations and published its proposed Regulation on liability for the operation of AI systems, which recommends, among other things, an update of the current product liability regime.

    Looking through the lens of human rights, the Council of Europe considered the feasibility of a legal framework for AI and how that might best be achieved. Published in December, its report  identified gaps to be plugged in the existing legal protection (a conclusion which had also been reached by the European Parliamentary Research Services, which found that existing laws, though helpful, fell short of the standards required for its proposed AI Ethics framework). Work is now ongoing to draft binding and non-binding instruments to take this study forward.

    United Kingdom   

    The AI Council’s AI Roadmap sets out recommendations to the UK government on the strategic direction for AI. That January 2021 report covers a range of areas, from promoting UK talent to trust and governance. For more detail read the executive summary.

    Only a month before, in December 2020, the House of Lords had published AI in the UK: No room for complacency, a report with a strong emphasis on the need for public trust in AI and the associated issue of ethical frameworks. Noting that industry is currently self-regulating, the report recommended sector regulation that would extend to practical advice as well as principles and training. This seems to be a sound conclusion given that the Council of Europe’s work included the review of over 100 ethical AI documents which, it found, started from common principles but interpreted these very differently when it came to operational practice.

    The government’s response to that report has just been published. It recognises the need for public trust in AI “including embedding ethical principles against a consensus normative framework.” The report promotes a number of initiatives, including the work of the AI Council and Ada Lovelace Institute, who have together been developing a legal framework for data governance upon which they are about to report.

    The influential Centre for Data Ethics and Innovation published its AI Barometer and its Review into Bias in Algorithmic Decision-Making. Both reports make interesting reading, with the barometer report looking at risk and regulation across a number of sectors. In the context of regulation, it is notable that the CDEI does not recommend a specialist AI regulator for the UK but seems to favour a sectoral approach if and when regulation is required.

    Regulators

    Regulators are interested in lawful use, of course, but are also concerned with the bigger picture. Might AI decision-making disadvantage certain consumers? Could AI inadvertently create sector vulnerability thanks to overreliance by the major players on a particular algorithm and/or data pool? (The competition authorities will be interested in that aspect too.) The UK’s Competition and Markets Authority published research into potential AI harms in January and is calling for evidence as to the most effective way to regulate AI. Visit the CMA website here.

    The Financial Conduct Authority will be publishing a report into AI transparency in financial services imminently. Unsurprisingly, the UK’s data protection regulator has published guidance to help organisations audit AI in the context of data protection compliance, and the public sector benefits from detailed guidance from the Turing Institute.

    Regulators themselves are now coming more into focus. The December House of Lords report also recommended regulator training in AI ethics and risk assessment. As part of its February response, the government states that the Competition and Markets Authority, the Information Commissioner’s Office and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to cooperate on issues of mutual importance, and that a wider forum of regulators and other organisations will consider training needs.

    2021 and beyond

    In Europe we can expect regulation to develop at pace in 2021, despite concerns from Denmark and others that AI may become over-regulated. As we increasingly develop the tools for classification and risk assessment, the question is therefore less about whether to regulate and more about which applications, contexts and sectors are candidates for early regulation.