The digital tightrope walk for business and human rights (19 March 2024)

Imagine a future where your access to justice depends on an algorithm, your freedom of expression is filtered through AI, and your personal data becomes a commodity traded without your consent. This is not a dystopian fantasy but a reality we are inching closer to as artificial intelligence (AI) becomes deeply integrated into our daily lives.

In an era where technology intertwines with daily life, AI emerges as a double-edged sword, cutting through the fabric of society with both promise and peril. As AI reshapes industries, it also casts a long shadow over fundamental human rights and ethical business practices. Consider the tale of a facial recognition system inaccurately flagging an innocent individual as a criminal suspect – and worse still, flagging individuals based on racial biases. Such instances underscore the urgent need for vigilance and responsibility in the age of AI.

The AI revolution and the rule of law

AI technologies are reshaping the legal landscape, introducing novel forms of digital evidence and altering traditional concepts of the rule of law. Courts worldwide grapple with the admissibility of AI-generated evidence, while law enforcement agencies increasingly rely on facial recognition and predictive policing tools, raising profound concerns about fairness, transparency, and accountability. The erosion of legal protections and standards in the face of AI’s opaque algorithms threatens the very foundation of justice, emphasising the need for regulatory frameworks that keep pace with technological advances.

The transformative power of AI in the legal domain is both fascinating and alarming. With the increasing spread of fake news, elections can be marred by misinformation, disinformation, and hate speech. AI advances can be key in orchestrating verification campaigns, as a pilot project conducted by the United Nations Development Programme in Zambia’s 2021 elections showed. In the United States, the use of AI in predictive policing and sentencing algorithms has sparked debate over fairness and bias. Studies, such as the 2016 ProPublica report, have highlighted how algorithms can inherit and amplify racial biases, challenging the very notion of impartial justice.
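The disparity at the heart of that debate can be made concrete with a simple audit. Below is a minimal sketch in Python – the records and group labels are invented for illustration, not drawn from any real risk-assessment data – comparing false-positive rates across two groups, one of the error metrics the ProPublica analysis focused on.

```python
# A toy fairness audit: compare false-positive rates across groups.
# All records below are invented for illustration.

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were flagged high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives)

records = [
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(g, f"{false_positive_rate(records, g):.0%}")
# Group A: 33%, Group B: 67%
```

If equally situated groups show unequal error rates, the tool is punishing one group more often for crimes its members never go on to commit – the essence of the bias described above.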

These issues underscore the necessity for legal systems worldwide to adapt and ensure AI technologies uphold the highest standards of equity, accuracy and transparency.

Intersectionality of AI and human rights

The impact of AI on human rights is far-reaching, affecting everything from freedom of expression to the right to privacy. For instance, social media algorithms can amplify or suppress certain viewpoints, while automated decision-making systems can deny individuals access to essential services based on biased data. Automated content moderation systems on social media platforms can also inadvertently silence marginalised voices, impacting freedom of speech. The deployment of mass surveillance technologies in countries like China similarly raises severe privacy concerns, illustrating the global need for AI governance that respects and protects individual rights.

These examples highlight the critical need for AI systems that are designed and deployed with a deep understanding of their human rights implications. Ensuring that AI technologies respect and promote human rights requires a concerted effort from developers, policymakers, and civil society.

Closer to home, the issue of digital and socioeconomic divides further complicates the intersectionality of AI and human rights. AI-driven solutions in healthcare and agriculture, for example, have shown immense potential to bridge socio-economic gaps. The balance between leveraging AI for societal benefits whilst protecting individual rights is a delicate one, necessitating nuanced governance frameworks.

Whilst these frameworks are still nascent in many jurisdictions around the world, the United Nations has prioritised efforts to secure the promotion, protection and enjoyment of human rights on the Internet. In 2021, the United Nations Human Rights Council adopted a resolution to that effect, heralded as a milestone because it recognises that all of the rights people have offline must also be protected online.

This resolution built on earlier UN resolutions that specifically condemn any measure to prevent or disrupt access to the internet, and that recognise the importance of access to information and privacy online for the realisation of the right to freedom of expression and to hold opinions without interference.

In 2023, the United Nations High Commissioner for Human Rights, Volker Türk, said the digital world was still in its early days. Around the world, more children and young people than ever before are online, either at home or at school, but depending on birthplace, not everyone has this chance.

The digital divide means a staggering 2.2 billion children and young people under 25 around the globe still do not have access to the Internet at home. They are being left behind, unable to access education and training, or news and information that could help protect their health, safety and rights. There is also a gap between girls and boys in terms of access to the Internet. He concluded: "It may be time to reinforce universal access to the Internet as a human right, and not just a privilege."

Corporate responsibility in the AI era

For corporations in South Africa, Africa, and globally, AI introduces new risk areas that must be navigated with caution and responsibility. General counsel the world over must investigate and implement strategies around privacy, data protection, and non-discrimination; these issues are paramount, as the misuse of AI can lead to significant reputational damage and legal liability. Corporations must adopt ethical AI frameworks and corporate social responsibility initiatives that prioritise human rights, demonstrating a commitment to responsible business practices in the digital age.

Corporations stand at the frontline of the AI revolution, bearing the responsibility to wield this powerful tool ethically. Google’s Project Maven, a collaboration with the Pentagon to enhance drone targeting through AI, faced internal and public backlash, leading to the establishment of AI ethics principles by the company. This example demonstrates the importance of corporate accountability and the potential repercussions of neglecting ethical considerations in AI deployment. It also highlights that influential corporations hold a significant level of leverage in their environments. This leverage should be used to progress respect for human rights across the value chain.

The challenge of regulation

Regulating AI presents a formidable challenge, particularly in Africa, where socio-economic and resource constraints are significant. The rapid pace of AI development often outstrips the ability of regulatory frameworks to adapt, leaving gaps that can be exploited to the detriment of society. Moreover, regulatory developments in the Global North often set precedents that may not be suitable for the African context, highlighting the need for regulations that are inclusive, contextually relevant, and capable of protecting citizens’ rights while fostering innovation.

The fast-paced evolution of AI technology poses a significant challenge to regulators, especially in the African context, where resources and expertise in technology governance are often limited. The European Union’s General Data Protection Regulation (GDPR) serves as a pioneering model for embedding principles of privacy and data protection in technology use, offering valuable lessons for African nations in crafting their regulatory responses to AI.

Towards a sustainable future

The path towards a sustainable future, where AI benefits humanity while safeguarding human rights, requires collaboration among businesses, regulators, and civil society. Stakeholders must work together to develop and implement guidelines and standards that ensure AI technologies are used ethically and responsibly. Highlighting examples of responsible AI use, such as initiatives that provide equitable access to technology or projects that leverage AI for social good, can inspire others to follow suit.

Collaboration is key to harnessing AI’s potential while safeguarding human rights and ethical standards. Initiatives like the Partnership on AI, which brings together tech giants, non-profits, and academics to study and formulate best practices on AI technologies, exemplify how collective action can lead to responsible AI development and use.

As AI and related technologies continue to transform our world, we must not lose sight of the human values that define us. The intersection of AI, business, and human rights presents complex challenges but also opportunities for positive change, not only for governments but for corporations too. By fostering ongoing dialogue and cooperation among all stakeholders, we can shape a future where technology serves humanity’s best interests, ensuring that the digital age is marked by innovation, equity, and respect for human rights. Corporate governance frameworks will need to adapt in response to these advances.

As Africa navigates the complexities of AI integration, the journey must be undertaken, byte by byte, with a steadfast commitment to ethical principles and human rights. The continent’s diverse tapestry of cultures and histories offers unique insights into responsible AI governance. By prioritising transparency, accountability, and inclusivity, African governments and corporations can lead the way in demonstrating how technology, guided by human values, can be a powerful tool for positive change. In the digital age, the fusion of innovation and ethics will define Africa’s trajectory, ensuring that AI becomes a catalyst for empowerment rather than a source of division.


Authors:

  • Pooja Dela-Cron is a Partner at Webber Wentzel
  • Paula-Ann Novotny is a Senior Associate at Webber Wentzel
AI: we may not need a new human right to protect us from decisions by algorithms – the laws already exist (12 October 2023)

There are risks and harms that come with relying on algorithms to make decisions. People are already feeling the impact of doing so. Whether reinforcing racial biases or spreading misinformation, many technologies that are labelled as artificial intelligence (AI) help amplify age-old malfunctions of the human condition.

In light of such problems, calls have been made to create a new human right against being subject to automated decision-making (ADM), which the UK Information Commissioner’s Office (ICO) describes as “the process of making a decision by automated means without any human involvement”.

Such systems rely on being exposed to data, whether factual, inferred, or created via profiling. But if effective regulation of ADM is the goal, creating new laws is probably not the way to go.

Our research suggests we should consider a different approach. Legal frameworks for data protection, non-discrimination, and human rights already offer protection to people from the negative impacts of ADM. Rules from these bodies of law can also guide regulation more generally. We could therefore focus on ensuring that the laws we already have are properly implemented.

Current harms and future risks

Automated decision making is being used in various ways – and there are more applications on the way. Areas subject to automation include the processing of asylum and welfare support applications and the deployment of lethal military technology. But even where ADM is considered to bring benefits, it can also have negative effects.

The criminalisation of children is one possible risk of using certain ADM systems, where "predictive risk models" used in child protection services can result in vulnerable children being further discriminated against. ADM can also make securing work harder – a hiring algorithm developed by Amazon "scored female applicants more poorly than their equivalently qualified male counterparts."

In several countries, including the UK, courts also rely on ADM. For example, it’s used to make sentencing recommendations, calculate the probability of a person reoffending, and assess the flight risk of defendants, which determines whether they will be released on bail pending trial.

These applications can result in unfair processes and unjust outcomes for many reasons. This could happen because a judge unwittingly accepts erroneous results produced by ADM, or because no one is able to understand how or why a particular system arrived at its conclusion.

Historically, human prejudices have also been embedded in the design of such software. This is because the algorithms are trained on real world data, often from the internet. Exposing the system to this information may improve its performance at a task from one perspective, but the data also reflects people's biases. This means that members of marginalised groups can end up being punished, in the way we saw earlier when women were disadvantaged by a hiring algorithm.
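To see how a model can inherit that prejudice, consider a deliberately crude sketch (all data and tokens below are invented): a screener that scores an application by the historical hire rate of the words it contains will penalise any term correlated with past rejections – the failure mode reported for the Amazon tool.

```python
# A deliberately crude screening model trained on past decisions.
# The history and tokens are invented for illustration.
from collections import defaultdict

history = [
    ("led robotics team", 1),           # 1 = hired in the past
    ("led women's robotics team", 0),   # 0 = rejected in the past
    ("state chess champion", 1),
    ("women's state chess champion", 0),
]

outcomes = defaultdict(list)
for text, hired in history:
    for token in text.split():
        outcomes[token].append(hired)

def token_rate(token):
    seen = outcomes.get(token)
    return sum(seen) / len(seen) if seen else 1.0  # unseen tokens are neutral

def screen(text):
    # Score an application by its worst-scoring token.
    return min(token_rate(t) for t in text.split())

print(screen("led robotics team"))          # 0.5
print(screen("led women's robotics team"))  # 0.0 -- one proxy term sinks it
```

No rule ever mentions gender explicitly; the bias rides in on a proxy token learned from past decisions.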

Protection and regulation

The urge to adopt new legal rules is perhaps understandable considering the stakes and the potential harm ADM could and does do. However, as regards creating a new human right, negotiating new laws takes time, money and resources. And once a new law comes into force, it can take decades before its application in practice is properly settled.

Given that many relevant laws already exist, it’s unclear whether a new human right would significantly influence how systems for automated decision making are designed and deployed.

Yet without tangible implementation and enforcement, the content of these existing laws can become hollow. Effective governance of ADM by these laws requires impact assessments of automated decisions, human supervision of ADM systems, and complaints processes. These should all be mandated. A thorough impact assessment will be able to identify, for example, unintended harms to individuals and groups, and help shape appropriate mitigation measures.


Read More: AI-generated misinformation: 3 teachable skills to help address it


Yet these information gathering measures need to be accompanied by sufficient oversight by a competent, resourced, and – possibly – public body. This would help uphold democratic accountability. Such bodies would also be tasked with ensuring that people negatively affected by ADM could file complaints that are adequately dealt with. These steps would make current laws on data protection, non-discrimination, and human rights more meaningful and effective in protecting individuals and groups from the harms of automated decisions.

The law across many areas is often criticised – sometimes rightly – for struggling to adapt to change. But a merit of the law in general is its ability to provide recourse to people who have experienced wrongdoing. It provides principled teeth to take a bite out of unprincipled conduct.

This capacity is significant for another reason. Public portrayals of digital technologies often mirror corporate spin, and commentary frequently tends towards "hyperbole, alarmism, or exaggeration". This hype complements practices such as ethics-washing, which provide a means of feigning commitment to regulation while ignoring the very laws capable of providing it.

Chatter about the likes of "AI ethics" greases the wheels of these strategies, sometimes turning nuanced and significant philosophical insights into box-ticking exercises. Ethics are an essential component of guiding the design, development, and deployment of automated decision making. However, the language of "ethics" can also be used by spin doctors to distract us.

If anything here is worth remembering, it’s that ADM is not only a future problem, it’s a present problem. The laws that exist now can be used to address pressing issues stemming from this technology.

Whether this happens depends on public and private bodies improving the procedural machinery needed to enforce and oversee legal rules. These rules, many of which have been around for a while, just need a bit more life breathed into them to function effectively.


South African court rules that clean air is a constitutional right: what needs to change (26 March 2022)

A court in South Africa has confirmed the constitutional right of the country's citizens to an environment that isn't harmful to their health. This includes the right to clean air, as exposure to air pollution affects human health. Air pollution also affects land and water systems, and decreases agricultural yields.

The case, referred to as the “Deadly Air” case, was brought against the government by two environmental justice groups – groundWork and the Vukani Environmental Justice Movement in Action. They were represented by the Centre for Environmental Rights. The case concerned air pollution in the Highveld Priority Area. The area includes one of South Africa’s largest cities, Ekurhuleni, and a large portion of the Mpumalanga province.

Air pollution levels in the area are often over the legal thresholds specified in the National Ambient Air Quality Standards. These standards are set to protect health. Exceeding the threshold therefore indicates a health risk. There have been some small improvements in air quality in the area, but not enough to ensure that it’s in compliance with the established standards.
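Compliance itself comes down to a simple comparison of measured concentrations against the standard. A minimal sketch of an exceedance check – the pollutant limit and readings below are illustrative placeholders, not actual NAAQS values:

```python
# Count exceedances of a 24-hour ambient standard.
# Limit and readings are illustrative, not the actual NAAQS values.
daily_pm25 = [38, 52, 41, 29, 66, 47, 35]   # hypothetical daily means, µg/m³
limit = 40                                   # hypothetical 24-hour standard

exceedances = [c for c in daily_pm25 if c > limit]
print(f"{len(exceedances)} of {len(daily_pm25)} days exceed the standard "
      f"(worst: {max(daily_pm25)} µg/m³)")
```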

The fact that the standards were exceeded was a key aspect of the case and the judgement. The judgement declared that the poor air quality in this area:

is in breach of the residents’ section 24(a) constitutional rights to an environment that is not harmful to their health and well-being.

The case is important for a number of reasons. The first is that, even though the standards are set to protect health, there was no penalty when they weren't met. By confirming clean air as a constitutional right, the judgement underscores how important compliance with the standards is.

The second is that the court’s finding that air quality is a constitutional right underscores the urgency with which South Africa needs to act. The hope is that the ruling will help unlock many of the challenges that have hindered improving air quality in this region and across the country.

Air pollution sources and solutions

The sources of air pollution in South Africa are diverse and complex. Managing them therefore requires a multi-sectoral approach.

When it comes to pollution in the Highveld Priority Area, the focus is often placed on industrial emissions, especially from large emitters such as the state utility Eskom and chemical giant Sasol. But they aren't the only sources of pollution in the area. And in many instances, the concentrations that South Africans breathe at ground level are driven by other, closer sources. These include vehicles, veld fires, mining, waste burning, and burning of fuels such as wood or coal for cooking or heating.

Pollution levels are often highest in low-income settlements, urban areas, and areas close to large industries – frequently the most vulnerable communities.

While it’s true that there are different sources of pollution across South Africa, most of the emissions are from the burning of fossil fuels. Approximately 86% of South Africa’s primary energy supply is from fossil fuels. In 2018, the total primary energy supply from renewable energy was 6%.

The contribution of fossil fuels to air pollution levels varies by place and time of year. But in many urban and industrialised areas, air pollution levels are dominated by emissions from the burning of fossil fuels.

The decarbonisation of South Africa’s energy system would therefore have large and rapid benefits to air quality.

A number of steps should be taken to get the process on the road.

What needs to be done

To improve air quality, the emissions of pollutants from a variety of sources must be decreased. This needs the involvement of different levels of government and coordination across numerous sectors and stakeholders.

Inadequate coordination among sectors has been a huge challenge in air quality management. This is due in part to the fact that improving air quality falls within the mandate of national as well as local government environment departments, while the sources of pollution – such as industry, mining, transport and energy – are regulated by other parts of government.

To improve air quality, the active involvement of departments such as transport, mineral resources and energy is needed. In addition, local sources of pollution are often under the control of local government, while regional sources such as large industries and pollution from highways are under provincial and national government.

Issues with local service delivery and waste management can lead to burning of waste that releases toxic pollutants right at ground level where people breathe. Thus effective air quality management stretches across sectors and levels of government.

This means that the various tiers of government need to be working in a co-ordinated way, which isn’t happening.


Read more: African countries need more air quality data – and sharing it unlocks its benefits


Another important step that needs to be taken is ensuring robust information on air pollution, especially the amount that is emitted, is available. This isn’t the case at the moment, which makes it difficult to track the trends of pollution.

For example, industrial emissions from regulated sources are collected by the Department of Forestry, Fisheries and the Environment. But information on the amount emitted and the emission-reduction technologies that industries are using isn't available. The importance of these data is highlighted in the court judgement.

This kind of information could make communities aware of the levels of pollution being emitted near them. In addition, scientists could use it to:

  • better simulate current air quality levels
  • assess the impacts of policies and interventions on air pollution
  • interpret the long term trends in the concentration of pollutants.

Experiences from other countries have shown that improving air quality takes dedication, resources and time but has large health, environment and economic benefits.

I’m hopeful that this court decision can help improve coordination and dedication across sectors in the development, implementation and enforcement of policies to improve air quality. This is urgently needed as South Africa tries to forge a path towards a just energy transition, which involves moving away from its heavy dependence on fossil fuels in a way that manages the negative effects on jobs and communities.

South Africa has stated its commitment to a just transition through its domestic plans and international partnerships.

At the time of publishing, the government hadn’t indicated whether it would appeal this landmark decision. As the decision can act as a catalyst for improved air quality in South Africa, it would be a shame if the government did appeal.

This article first appeared on The Conversation.

Facebook faces $150 billion class-action complaint over Myanmar Crisis (7 December 2021)

Myanmar has had a tumultuous history of civil unrest and cultural and religious violence, the most recent fallout of which was a military coup earlier this year. Back in 2017, after years of abuse, violence and vitriolic hatred at the hands of the Myanmar military, over 700,000 Rohingya Muslims fled the country into Bangladesh in the largest human exodus in Asia since the Vietnam War.

In 2018, a U.N. investigation found that Facebook had played a key role in distributing hate speech that stoked violence against the Rohingya people. Myanmar has a particularly large and active Facebook user base, and social media platforms are known to be effective mobilising platforms for radicalisation and violence. Now Rohingya refugees are seeking $150 billion from Facebook and Meta, alleging that the social media goliath didn't do enough to curb the spread of hate speech.

Due diligence

Two law firms, Edelson PC and Fields PLLC, filed a class action suit against Meta (formerly Facebook Inc.) earlier this week, reports Reuters. Additionally, British lawyers sent a letter of notice to Facebook's offices in London.

Facebook has said in the past that it was too slow in removing and preventing hate speech regarding the Rohingya in Myanmar, and has since upped its game (for example, by banning the Myanmar military from Instagram and Facebook following the 1 Feb coup). However, it has repeatedly defended itself legally, citing Section 230, a U.S. internet law that declares online platforms not liable for content posted by users. 

The Rohingya complaint seeks to apply local law to the case due to the tangible damages being done in Myanmar. Whether or not this will hold water for U.S. courts remains to be seen. Additionally, the complaint reportedly makes reference to recent claims by Facebook whistleblower Frances Haugen that the platform does not adequately moderate hate speech in countries where it is most likely to cause physical damage. 

We need concrete protections from artificial intelligence threatening human rights (27 September 2021)

Events over the past few years have revealed several human rights violations associated with increasing advances in artificial intelligence (AI).

Algorithms created to regulate speech online have censored speech ranging from religious content to sexual diversity. AI systems created to monitor illegal activities have been used to track and target human rights defenders. And algorithms have discriminated against Black people when they have been used to detect cancers or assess the flight risk of people accused of crimes. The list goes on.

As researchers studying the intersection between AI and social justice, we’ve been examining solutions developed to tackle AI’s inequities. Our conclusion is that they leave much to be desired.

Ethics and values

Some companies voluntarily adopt ethical frameworks that are difficult to implement and have little concrete effect. The reason is twofold. First, ethics are founded on values, not rights, and ethical values tend to differ across the spectrum. Second, these frameworks cannot be enforced, making it difficult for people to hold corporations accountable for any violations.

Even frameworks that are mandatory — like Canada’s Algorithmic Impact Assessment Tool — act merely as guidelines supporting best practices. Ultimately, self-regulatory approaches do little more than delay the development and implementation of laws to regulate AI’s uses.

And as illustrated with the European Union’s recently proposed AI regulation, even attempts towards developing such laws have drawbacks. This bill assesses the scope of risk associated with various uses of AI and then subjects these technologies to obligations proportional to their proposed threats.

As non-profit digital rights organization Access Now has pointed out, however, this approach doesn’t go far enough in protecting human rights. It permits companies to adopt AI technologies so long as their operational risks are low.

Just because operational risks are minimal doesn’t mean that human rights risks are non-existent. At its core, this approach is anchored in inequality. It stems from an attitude that conceives of fundamental freedoms as negotiable.

So the question remains: why is it that such human rights violations are permitted by law? Although many countries possess charters that protect citizens’ individual liberties, those rights are protected against governmental intrusions alone. Companies developing AI systems aren’t obliged to respect our fundamental freedoms. This fact remains despite technology’s growing presence in ways that have fundamentally changed the nature and quality of our rights.


AI violations

Our current reality deprives us of the agency to vindicate the rights infringed through our use of AI systems. As such, "the access to justice dimension that human rights law serves becomes neutralised": a violation doesn't necessarily lead to reparations for the victims nor an assurance against future violations, unless mandated by law.

But even laws that are anchored in human rights often lead to similar results. Consider the European Union’s General Data Protection Regulation, which allows users to control their personal data and obliges companies to respect those rights. Although an important step towards more acute data protection in cyberspace, this law hasn’t had its desired effect. The reason is twofold.

First, the solutions favoured don't always permit users to concretely mobilize their human rights. Second, they don't empower users with an understanding of the value of safeguarding their personal information. Privacy rights are about much more than just having something to hide.

Addressing biases

These approaches all attempt to mediate between both the subjective interests of citizens and those of industry. They try to protect human rights while ensuring that the laws adopted don’t impede technological progress. But this balancing act often results in merely illusory protection, without offering concrete safeguards to citizens’ fundamental freedoms.

To achieve this, the solutions adopted must be adapted to the needs and interests of individuals, rather than assumptions of what those parameters might be. Any solution must also include citizen participation.

Legislative approaches seek only to regulate technology’s negative side effects rather than address their ideological and societal biases. But addressing human rights violations triggered by technology after the fact isn’t enough. Technological solutions must primarily be based on principles of social justice and human dignity rather than technological risks. They must be developed with an eye to human rights in order to ensure adequate protection.

One approach gaining traction is known as “Human Rights By Design.” Here, “companies do not permit abuse or exploitation as part of their business model.” Rather, they “commit to designing tools, technologies, and services to respect human rights by default.”

This approach aims to encourage AI developers to categorically consider human rights at every stage of development. It ensures that algorithms deployed in society will remedy rather than exacerbate societal inequalities. It takes the steps necessary to allow us to shape AI, and not the other way around.

  • Karine Gentelet — Professor and holder of the Abeona-ENS-OBVIA Chair in Artificial Intelligence and Social Justice, Université du Québec en Outaouais (UQO)
  • Sarit K. Mizrahi — Ph.D. in Law Candidate, L’Université d’Ottawa/University of Ottawa
  • This article first appeared on The Conversation

Why technology puts human rights at risk (5 July 2018)

Movies such as 2001: A Space Odyssey, Blade Runner and Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don't seem so far removed from reality.

Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.

Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?

AI and human rights

First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I’m not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.


This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life and death decisions, with limited or no human control.

AI also revolutionises the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google’s participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, which can be used for extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it will not be renewing its contract.

In 2013, the extent of surveillance practices was highlighted by the Edward Snowden revelations. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica’s harvesting of personal data via the use of social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference into democratic elections that damage the right to freedom of expression.

Meanwhile, critical data analysts challenge discriminatory practices associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.


Ambiguous bots

The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?

There are ongoing efforts to update existing human rights principles for this era. These include the UN Framing and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age and the Future of Life Institute’s Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.

These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.

Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.

But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.

And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.

  • Birgit Schippers is Visiting Research Fellow, Senator George J Mitchell Institute for Global Peace, Security and Justice, Queen’s University Belfast
  • This article first appeared on The Conversation
Should cybersecurity be a human right? (14 February 2017)

Having access to the internet is increasingly considered to be an emerging human right. International organizations and national governments have begun to formally recognize its importance to freedom of speech, expression and information exchange. The next step to help ensure some measure of cyber peace online may be for cybersecurity to be recognized as a human right, too.

The United Nations has taken note of the crucial role of internet connectivity in “the struggle for human rights.” United Nations officials have decried the actions of governments cutting off internet access as denying their citizens’ rights to free expression.

But access is not enough. Those of us who have regular internet access often suffer from cyber-fatigue: We’re all simultaneously expecting our data to be hacked at any moment and feeling powerless to prevent it. Late last year, the Electronic Frontier Foundation, an online rights advocacy group, called for technology companies to “unite in defense of users,” securing their systems against intrusion by hackers as well as government surveillance.

It’s time to rethink how we understand the cybersecurity of digital communications. One of the U.N.‘s leading champions of free expression, international law expert David Kaye, in 2015 called for “the encryption of private communications to be made a standard.” These and other developments in the international and business communities are signaling what could be early phases of declaring cybersecurity to be a human right that governments, companies and individuals should work to protect.

Is internet access a right?

The idea of internet access as a human right is not without controversy. No less an authority than Vinton Cerf, a “father of the internet,” has argued that technology itself is not a right, but a means through which rights can be exercised.

All the same, more and more nations have declared their citizens’ right to internet access. Spain, France, Finland, Costa Rica, Estonia and Greece have codified this right in a variety of ways, including in their constitutions, laws and judicial rulings.

A former head of the U.N.‘s global telecommunications governing body has argued that governments must “regard the internet as basic infrastructure – just like roads, waste and water.” Global public opinion seems to overwhelmingly agree.

Cerf’s argument may, in fact, strengthen the case for cybersecurity as a human right – ensuring that technology enables people to exercise their rights to privacy and free communication.

Existing human rights law

Current international human rights law includes many principles that apply to cybersecurity. For example, Article 19 of the Universal Declaration of Human Rights includes protections of freedom of speech, communication and access to information. Similarly, Article 3 states “Everyone has the right to life, liberty and security of person.” But enforcing these rights is difficult under international law. As a result, many countries ignore the rules.

There is cause for hope, though. As far back as 2011, the U.N.'s High Commissioner for Human Rights said that human rights are equally valid online as offline. Protecting people's privacy is no less important when handling paper documents, for instance, than when dealing with digital correspondence. The U.N.'s Human Rights Council reinforced that stance in 2012, 2014 and 2016.

In 2013, the U.N. General Assembly itself – the organization’s overall governing body, comprising representatives from all member nations – voted to confirm people’s “right to privacy in the digital age.” Passed in the wake of revelations about U.S. electronic spying around the globe, the document further endorsed the importance of protecting privacy and freedom of expression online. And in November 2015, the G-20, a group of nations with some of the world’s largest economies, similarly endorsed privacy, “including in the context of digital communications.”

Putting protections in place

Simply put, the obligation to protect these rights involves developing new cybersecurity policies, such as encrypting all communications and discarding old and unneeded data rather than keeping it around indefinitely. More firms are using the U.N.'s Guiding Principles to help inform their business decision-making to promote human rights due diligence. They are also using U.S. government recommendations, in the form of the National Institute of Standards and Technology Cybersecurity Framework, to help determine how best to protect their data and that of their customers.
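As a hedged illustration of two of those policies – encryption by default and discarding stale data – here is a minimal Python sketch. It assumes the widely used third-party cryptography package, which is our choice for illustration, not a tool named in this article.

```python
# Minimal sketch: encrypt data at rest and refuse to use it past a
# retention window. Requires the third-party `cryptography` package.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # in practice, kept in a key-management system
box = Fernet(key)

token = box.encrypt(b"private correspondence")

# Fernet tokens are timestamped, so decryption can enforce a maximum age --
# one way to operationalise "discard old and unneeded data".
RETENTION_SECONDS = 30 * 24 * 3600
try:
    plaintext = box.decrypt(token, ttl=RETENTION_SECONDS)
    print(plaintext.decode())
except InvalidToken:
    print("Data expired or tampered with; do not use it.")
```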

In time, the tide will likely strengthen. Internet access will become more widely recognized as a human right – and following in its wake may well be cybersecurity. As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections.

Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being. Now is the time for businesses, governments and individuals to prepare for this development by incorporating cybersecurity as a fundamental ethical consideration in telecommunications, data storage, corporate social responsibility and enterprise risk management.

Internet freedom: why access is becoming a human right (5 June 2016)

When most people think or speak about internet freedom, they are often concerned with the right, for example, to say what you want online without censorship and without being subject to the chilling effects of surveillance.

These kinds of freedoms are sometimes called "negative freedoms" or "freedoms from…". They address the right not to be interfered with or obstructed in living your life. But there are also "positive freedoms" — "freedoms to…"

Some constitutions – notably the US Constitution – only protect negative rights. But South Africa’s includes both negative and positive rights. Positive rights include, for example, the socio-economic rights to food and shelter.

In its Internet Freedom Index, Freedom House ranks South Africa as "free" alongside the UK, Argentina and Kenya. The ranking is largely because Freedom House weighs negative freedoms above positive ones. But how "free" is the internet in South Africa? For most, it is positive internet freedoms that may be more urgent.

Freedom is access

The South African Constitution in the Bill of Rights does not explicitly protect internet freedom but section 16(1) states that everyone has the right to “freedom to receive or impart information or ideas”. This is a right for everyone and it is not just a freedom from interference – a “freedom from” – but also a “freedom to”: a right to be able to reach others and be reached by others. In this it follows Article 19 of the Universal Declaration of Human Rights.

In his book Development as Freedom, Amartya Sen describes freedom as “our capability to lead the kind of lives we have reason to value”. In many ways, the internet is extending such capabilities.

More people now go online daily than read a newspaper. They are able to read a much greater variety of voices than are seen in print or on television. And public services are offering improved responsiveness on social media.

But we are also seeing a new development – instances where internet access is now a requirement. Examples include:

  • registering a company;
  • applying for school places – the Gauteng Education Department now requires parents with children entering primary or high school to apply online, where previously they could apply at the local school; and
  • applying for jobs at the South African Broadcasting Corporation, which has announced that it will no longer advertise its jobs in newspapers, directing job seekers to its own website.

Indications from government are that we are likely to see more such initiatives. The result will be that South Africans’ ability to lead the kind of lives they value will become increasingly dependent on the physical, procedural, economic and social networks that we call “the internet”.

The question of cost

According to the All Media Products Survey (AMPS) of June 2015, fewer than half of South African adults had used the internet in the previous four weeks. More than half did not.

When we asked a representative sample of non-users in South Africa in 2012 why they hadn’t gone online, the main reason was that they had no device to connect with (87%). The second reason was that they didn’t know how to use it (76%) and the third was that it was too expensive (60%).

According to the survey, nine out of ten South Africans now use a mobile phone but only half of those have access to smartphones. The most popular phone brand in South Africa is still Nokia, and most of the models in use have limited or no ability to connect to the net. And because only the better off have access to fixed lines at home or at work, the majority of South Africans, when they do get online, are dependent on mobile networks.

Mobile data is costly.

The International Telecommunications Union and the UN's Educational, Scientific and Cultural Organisation have set a goal for affordable broadband internet access: entry-level broadband should not cost more than 5% of average monthly income. Because of a flawed methodology, they state in a 2015 annual report that South Africa falls well within that target. But digging into the figures shows how unaffordable the internet is for most South Africans.

Statistics SA sets an upper bound poverty line of R779 per month per person (in 2011 prices). Most – about 53% – of the South African population live on income below this, according to the last census. So this poverty line is more or less the average income in the country. The poverty line adjusted for inflation to 2016 would be R1 031.

Taking the international 5% of income goal gives a maximum budget of about R52 per month. On the three major networks (which account for more than 95% of all mobile customers), 500MB of data – the amount they set as a minimum – costs between R85 and R105. So for the average South African, 500MB per month is unaffordable. In fact, mobile data prices would have to fall by about half to be affordable.
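The arithmetic is easy to check. A quick sketch using the figures above:

```python
# Reproducing the affordability arithmetic above (2016 rand figures).
poverty_line = 1031          # R per person per month, inflation-adjusted
target_share = 0.05          # ITU/UNESCO affordability goal
budget = poverty_line * target_share
print(f"Affordable monthly data spend: R{budget:.0f}")        # ~R52

for bundle_price in (85, 105):   # observed 500MB prices on major networks
    cut_needed = 1 - budget / bundle_price
    print(f"A R{bundle_price} bundle must fall {cut_needed:.0%} to be affordable")
# Prints 39% and 51% -- hence "prices would have to fall by about half".
```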

And is 500MB per month enough? It is enough for a lot of instant messaging, or about half an hour a day of browsing the web or using Facebook. But it is not enough to participate in otherwise free online courses such as Khan Academy that often rely heavily on video.

This is affecting usage. The most popular online activity is instant messaging using applications like WhatsApp. But only one in five people download music online.

Could mobile data be much cheaper in South Africa? Evidence suggests that the answer is yes. Research ICT Africa's price index shows that South Africa's data prices are over 20% higher than in Nigeria, Uganda and Mozambique, and three times those in Kenya.

It is also worth noting that the poor in South Africa pay much more for data than the better-off. If you have a fixed line in your home you can buy pre-paid data bundles for R7 per GB or even less, a small fraction of what mobile network users pay.

Free internet?

We could go further and ask if the internet could and should not only be cheaper but free? In some places and for some people it already is. That includes university students thanks to a network for tertiary institutions funded by the government. It also includes many residents in the metropole of Tshwane – including townships – where there are over 600 wifi hotspots offering 500MB of data per day at fast speeds for free.

Just as South African municipalities give poor households a minimum amount of 6 000 litres of water and 50kWh of electricity free each month, they could extend this model to the internet.

As lawyers sometimes say, the right to freedom of expression is an 'enabling right' – a right that enables people to access or defend other rights. In the same way the internet itself is now an enabling technology that is increasingly required to participate in social, political and economic life.

For many or most South Africans whether or not the Films and Publications Board interferes with their right to view video material online does not affect ‘their capability to lead the lives they value’ because they cannot afford to access video or audio content online. At present, defending ‘negative’ internet rights is protecting the rights of the few. We need to move to demanding the ‘positive right’ of affordable access if we want internet freedom for all.

  • Indra de Lanerolle is Visiting Researcher, Network Society Lab, Journalism and Media Programme, University of the Witwatersrand
  • This article first appeared on The Conversation