Stuff South Africa | South Africa's Technology News Hub

The digital tightrope walk for business and human rights
Tue, 19 Mar 2024

Imagine a future where your access to justice depends on an algorithm, your freedom of expression is filtered through AI, and your personal data becomes a commodity traded without your consent. This is not a dystopian fantasy but a reality we are inching closer to as artificial intelligence (AI) becomes deeply integrated into our daily lives.

In an era where technology intertwines with daily life, AI emerges as a double-edged sword, cutting through the fabric of society with both promise and peril. As AI reshapes industries, it also casts a long shadow over fundamental human rights and ethical business practices. Consider the tale of a facial recognition system inaccurately flagging an innocent individual as a criminal suspect – and worse still, flagging individuals based on racial biases. Such instances underscore the urgent need for vigilance and responsibility in the age of AI.

The AI revolution and the rule of law

AI technologies are reshaping the legal landscape, introducing novel forms of digital evidence and altering traditional concepts of the rule of law. Courts worldwide grapple with the admissibility of AI-generated evidence, while law enforcement agencies increasingly rely on facial recognition and predictive policing tools, raising profound concerns about fairness, transparency, and accountability. The erosion of legal protections and standards in the face of AI’s opaque algorithms threatens the very foundation of justice, emphasising the need for regulatory frameworks that keep pace with technological advances.

The transformative power of AI in the legal domain is both fascinating and alarming. With the increasing spread of fake news, elections can be marred by misinformation, disinformation, and hate speech. AI advances can be key in orchestrating verification campaigns, as a pilot project conducted by the United Nations Development Programme in Zambia’s 2021 elections showed. In the United States, the use of AI in predictive policing and sentencing algorithms has sparked debate over fairness and bias. Studies, such as the 2016 ProPublica report, have highlighted how algorithms can inherit and amplify racial biases, challenging the very notion of impartial justice.

These issues underscore the necessity for legal systems worldwide to adapt and ensure AI technologies uphold the highest standards of equity, accuracy and transparency.

Intersectionality of AI and human rights

The impact of AI on human rights is far-reaching, affecting everything from freedom of expression to the right to privacy. For instance, social media algorithms can amplify or suppress certain viewpoints, while automated decision-making systems can deny individuals access to essential services based on biased data. Automated content moderation systems on social media platforms can also inadvertently silence marginalised voices, impacting freedom of speech. The deployment of mass surveillance technologies in countries like China similarly raises severe privacy concerns, illustrating the global need for AI governance that respects and protects individual rights.

These examples highlight the critical need for AI systems that are designed and deployed with a deep understanding of their human rights implications. Ensuring that AI technologies respect and promote human rights requires a concerted effort from developers, policymakers, and civil society.

Closer to home, the issue of digital and socioeconomic divides further complicates the intersectionality of AI and human rights. AI-driven solutions in healthcare and agriculture, for example, have shown immense potential to bridge socio-economic gaps. The balance between leveraging AI for societal benefits whilst protecting individual rights is a delicate one, necessitating nuanced governance frameworks.

Whilst these frameworks are still nascent in many jurisdictions around the world, the United Nations has prioritised efforts to secure the promotion, protection and enjoyment of human rights on the Internet. In 2021, the United Nations Human Rights Council adopted the UN resolution on the promotion, protection and enjoyment of human rights on the Internet. The resolution was heralded as a milestone and recognises that all the rights people have offline must also be protected online.

This resolution came off the back of other UN resolutions, specifically condemning any measure to prevent or disrupt access to the internet and recognising the importance of access to information and privacy online for the realisation of the right to freedom of expression and to hold opinions without interference.

In 2023, the United Nations High Commissioner for Human Rights, Volker Türk, said the digital world was still in its early days. Around the world, more children and young people than ever before are online, either at home or at school, but depending on birthplace, not everyone has this chance.

The digital divide means a staggering 2.2 billion children and young people under 25 around the globe still do not have access to the Internet at home. They are being left behind, unable to access education and training, or news and information that could help protect their health, safety and rights. There is also a gap between girls and boys in terms of access to the Internet. He concluded by saying “It may be time to reinforce universal access to the Internet as a human right, and not just a privilege”.

Corporate responsibility in the AI era

For corporations in South Africa, Africa, and globally, AI introduces new risk areas that must be navigated with caution and responsibility. General counsel the world over must investigate and implement strategies around privacy, data protection and non-discrimination, which are paramount because the misuse of AI can lead to significant reputational damage and legal liabilities. Corporations must adopt ethical AI frameworks and corporate social responsibility initiatives that prioritise human rights, demonstrating a commitment to responsible business practices in the digital age.

Corporations stand at the frontline of the AI revolution, bearing the responsibility to wield this powerful tool ethically. Google’s Project Maven, a collaboration with the Pentagon to enhance drone targeting through AI, faced internal and public backlash, leading to the establishment of AI ethics principles by the company. This example demonstrates the importance of corporate accountability and the potential repercussions of neglecting ethical considerations in AI deployment. It also highlights that influential corporations hold a significant level of leverage in their environments. This leverage should be used to progress respect for human rights across the value chain.

The challenge of regulation

Regulating AI presents a formidable challenge, particularly in Africa, where socio-economic and resource constraints are significant. The rapid pace of AI development often outstrips the ability of regulatory frameworks to adapt, leaving gaps that can be exploited to the detriment of society. Moreover, regulatory developments in the Global North often set precedents that may not be suitable for the African context, highlighting the need for regulations that are inclusive, contextually relevant, and capable of protecting citizens’ rights while fostering innovation.

The fast-paced evolution of AI technology poses a significant challenge to regulators, especially in the African context, where resources and expertise in technology governance are often limited. The European Union’s General Data Protection Regulation (GDPR) serves as a pioneering model for embedding principles of privacy and data protection in technology use, offering valuable lessons for African nations in crafting their regulatory responses to AI.

Towards a sustainable future

The path towards a sustainable future, where AI benefits humanity while safeguarding human rights, requires collaboration among businesses, regulators, and civil society. Stakeholders must work together to develop and implement guidelines and standards that ensure AI technologies are used ethically and responsibly. Highlighting examples of responsible AI use, such as initiatives that provide equitable access to technology or projects that leverage AI for social good, can inspire others to follow suit.

Collaboration is key to harnessing AI’s potential while safeguarding human rights and ethical standards. Initiatives like the Partnership on AI, which brings together tech giants, non-profits, and academics to study and formulate best practices on AI technologies, exemplify how collective action can lead to responsible AI development and use.

As AI and related technologies continue to transform our world, we must not lose sight of the human values that define us. The intersection of AI, business, and human rights presents complex challenges but also opportunities for positive change, not only for governments but for corporations too. By fostering ongoing dialogue and cooperation among all stakeholders, we can shape a future where technology serves humanity’s best interests, ensuring that the digital age is marked by innovation, equity, and respect for human rights. Corporate governance frameworks will need to adapt in response to these advances.

As Africa navigates the complexities of AI integration, the journey must be undertaken, byte by byte, with a steadfast commitment to ethical principles and human rights. The continent’s diverse tapestry of cultures and histories offers unique insights into responsible AI governance. By prioritising transparency, accountability, and inclusivity, African governments and corporations can lead the way in demonstrating how technology, guided by human values, can be a powerful tool for positive change. In the digital age, the fusion of innovation and ethics will define Africa’s trajectory, ensuring that AI becomes a catalyst for empowerment rather than a source of division.


Authors:

  • Pooja Dela-Cron is a Partner at Webber Wentzel
  • Paula-Ann Novotny is a Senior Associate at Webber Wentzel
Something felt ‘off’ – how AI messed with human research, and what we learned
Mon, 18 Mar 2024

All levels of research are being changed by the rise of artificial intelligence (AI). Don’t have time to read that journal article? AI-powered tools such as TLDRthis will summarise it for you.

Struggling to find relevant sources for your review? Inciteful will list suitable articles with just the click of a button. Are your human research participants too expensive or complicated to manage? Not a problem – try synthetic participants instead.

Each of these tools suggests AI could be superior to humans in outlining and explaining concepts or ideas. But can humans be replaced when it comes to qualitative research?

This is something we recently had to grapple with while carrying out unrelated research into mobile dating during the COVID-19 pandemic. And what we found should temper enthusiasm for artificial responses over the words of human participants.

Encountering AI in our research

Our research is looking at how people might navigate mobile dating during the pandemic in Aotearoa New Zealand. Our aim was to explore broader social responses to mobile dating as the pandemic progressed and as public health mandates changed over time.

As part of this ongoing research, we prompt participants to develop stories in response to hypothetical scenarios.

In 2021 and 2022 we received a wide range of intriguing and quirky responses from 110 New Zealanders recruited through Facebook. Each participant received a gift voucher for their time.

Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.

A human-generated story about dating during the pandemic.

These responses captured the highs and lows of online dating, the boredom and loneliness of lockdown, and the thrills and despair of finding love during the time of COVID-19.

But, perhaps most of all, these responses reminded us of the idiosyncratic and irreverent aspects of human participation in research – the unexpected directions participants go in, or even the unsolicited feedback you can receive when doing research.

But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.

This time many of the stories felt “off”. Word choices were quite stilted or overly formal. And each story was quite moralistic in terms of what one “should” do in a situation.

Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.
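In practice, our screening pass amounted to scoring each story and flagging the suspicious ones for manual review. The sketch below is a rough illustration only: the scoring function is a stand-in placeholder rather than a real detector (we ran stories through tools like ZeroGPT by hand), and the CSV file and column names are hypothetical.

    import csv

    def ai_probability(text: str) -> float:
        """Placeholder stand-in for a real AI-detection tool's score (0 to 1)."""
        stilted_markers = ("moreover", "furthermore", "in conclusion", "one should")
        hits = sum(text.lower().count(marker) for marker in stilted_markers)
        return min(1.0, hits / 4)  # crude proxy: stilted phrasing per story

    THRESHOLD = 0.5  # assumed cut-off for sending a story to manual review

    with open("stories.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            score = ai_probability(row["story"])
            if score >= THRESHOLD:
                print(f"Review participant {row['id']} (score {score:.2f})")

Whatever the tool, the score was only ever a prompt for a human read-through, not a verdict on its own.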

Moralistic and stilted: an AI-generated story about dating during the pandemic.

Contrary to claims that AI can sufficiently replicate human participants in research, we found AI-generated stories to be woeful.

We were reminded that an essential ingredient of any social research is for the data to be based on lived experience.

Is AI the problem?

Perhaps the biggest threat to human research is not AI, but rather the philosophy that underpins it.

It is worth noting the majority of claims about AI’s capabilities to replace humans come from computer scientists or quantitative social scientists. In these types of studies, human reasoning or behaviour is often measured through scorecards or yes/no statements.

This approach necessarily fits human experience into a framework that can be more easily analysed through computational or artificial interpretation.

In contrast, we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally pointed to with online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities they might create for intimacy during a time of lockdowns and evolving health mandates.


Read More: Emotion-tracking AI on the job: Workers fear being watched – and misunderstood


In general, we found AI poorly simulated these experiences.

Some might accept generative AI is here to stay, or that AI should be viewed as offering various tools to researchers. Other researchers might retreat to forms of data collection, such as surveys, that might minimise the interference of unwanted AI participation.

But, based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference.

There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants.

Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment.

Regardless of researchers’ theoretical orientation, how we work to limit the involvement of AI is a question for anyone interested in understanding human perspectives or experiences. If anything, the limitations of AI reemphasise the importance of being human in social research.


  • Alexandra Gibson is a Senior Lecturer in Health Psychology, Te Herenga Waka — Victoria University of Wellington
  • Alex Beattie is a Research Fellow, School of Health, Te Herenga Waka — Victoria University of Wellington
  • This article first appeared in The Conversation

Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI
Fri, 15 Mar 2024

In December 2023, Google announced the launch of its new large language model (LLM) named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI’s GPT-4.

But why does Google consider Gemini such an important milestone, and what does this mean for users of Google’s services? And, generally speaking, what does it mean in the context of the current hyperfast-paced developments in AI?

AI everywhere

Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to their online advertising business — their main source of revenue — as well as for Android phone software, with trimmed versions of Gemini running on limited capacity hardware.

For users, Gemini means new features and improved capacities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given the bulk of their software is easily upgradable cloud services.

But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini will allow Google to reinstate itself as a major player in AI in the public view. Google is a powerhouse in AI, with large and strong research teams at the origin of many major advances of the last decade.

There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.

Strengthening AI

At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.

A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.

More importantly, the models’ ability to handle distinct modalities enables the training of globally superior AI models, compared to distinct models trained independently for each modality. Indeed, such multimodal models are deemed to be stronger since they are exposed to different perspectives of the same concepts.

For example, the concept of birds may be better understood through learning from a mix of birds’ textual descriptions, vocalizations, images and videos. This idea of multimodal transformer models has been explored in previous research at Google, Gemini being the first full-fledged commercial implementation of the approach.
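As a concrete illustration of what a multimodal prompt looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. It assumes you have an API key and a local image file called bird.jpg (both placeholders), and model names and SDK details may change over time.

    # Send an image plus a text question to a vision-capable Gemini model.
    import google.generativeai as genai
    import PIL.Image

    genai.configure(api_key="YOUR_API_KEY")             # placeholder key
    model = genai.GenerativeModel("gemini-pro-vision")  # vision-capable model

    image = PIL.Image.open("bird.jpg")
    response = model.generate_content(
        ["What species of bird is this, and what does its call sound like?", image]
    )
    print(response.text)

The single request mixes an image and text, and the model reasons over both together, which is the point of multimodal training.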

Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).

Risks of AGI

Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future generates discussion in the research community and, more broadly, in society.

On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.

Others claim that we are still very far from such actionable AGI, that the current approaches allow for a shallow modelling of intelligence, mimicking the data on which they are trained, and lack an effective world model — a detailed understanding of actual reality — required to achieve human-level intelligence.

On the other hand, one could argue that focusing the conversation on existential risk is distracting attention from more immediate impacts brought on by recent advances of AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and enforcing the dominance of Big Tech.


Read More: Google Gemini replaces Bard as catch-all AI platform


The line to follow lies somewhere in between all of these considerations. We are still far from the advent of actionable AGI — additional breakthroughs are required, including introducing stronger capacities for symbolic modelling and reasoning.

In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.

Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.


Honor Magic V2 and Magic 6 Pro flagships arrive in SA next week
Wed, 13 Mar 2024

Choosing a smartphone is about to get a little more difficult with Honor launching its Magic V2 and Magic 6 Pro in South Africa next week.

While the Magic V2 isn’t exactly new — it launched in China in July last year — it could still prove disruptive to the local foldable smartphone market. When it lands, it’ll claim the title of the thinnest and lightest folding smartphone in the country, ahead of Samsung’s Galaxy Z Fold 5 and Huawei’s Mate X3.

The Magic 6 Pro, on the other hand, is entirely new and was only recently announced at MWC last month. Honor has Samsung’s Galaxy S24 Ultra in its sights as the only other phone available in the country (so far) with on-device AI capabilities. Both devices also feature pretty respectable spec sheets.

Honor brings fresh Magic V2 to SA

Being slightly older, the Magic V2 sports last year’s flagship Snapdragon 8 Gen 2 5G chipset with 16GB of RAM and 512GB of storage. It might not be new, but that chipset still offers impressive performance. We’re eager to see, though, how Honor handles the heat in the Magic V2’s remarkably thin and light chassis — we’re talking 156.7 x 145.4 x 4.7 mm unfolded and 156.7 x 74.1 x 9.9 mm folded, while weighing only 231g.

The folding internal OLED display measures 7.92in with a 2,156 x 2,344 resolution. It uses LTPO (Low-temperature polycrystalline oxide) tech meaning it can vary its refresh rate, reducing it to save battery and increasing it up to 120Hz for buttery smooth scrolling or gaming. The Magic V2 also features a 6.43in LTPO OLED cover screen with a 1,060 x 2,376 resolution and HDR10+ support.

The Magic 6 Pro packs an equally impressive 6.8in LTPO OLED screen with a reported max brightness of 5,000 nits — that’s only just shy of the Sun’s 1.6 billion nits, but certainly the brightest display you can get here. Behind the panel sits the latest Snapdragon 8 Gen 3 chipset along with 12GB of RAM and 512GB storage options.

Honor’s marketing is making a big fuss about the Magic 6 Pro’s camera performance. It houses two 50MP sensors — one of which will use a variable aperture — along with a 180MP sensor behind a periscope telephoto lens. Up front, you’ll find another 50MP selfie cam and a TOF (time-of-flight) sensor for depth and biometrics.

The Magic V2 isn’t as focused on snapping pics but that doesn’t mean it won’t be capable. It also houses two 50MP sensors along with a 20MP telephoto shooter around back and a 16MP selfie cam.

When it comes to portable power, the Magic V2 uses a silicon-carbon (Si/C) 5,000mAh battery, instead of the traditional lithium-polymer (Li-Po) cells used in most modern smartphones, and relies on Honor’s 66W SuperCharge tech for refilling. The Magic 6 Pro packs a 5,600mAh battery based on the same silicon-carbon tech and supports charging at 100W wired and 66W wireless.

The Magic V2 and Magic 6 Pro will officially launch in SA next week. We’ll need to wait until then to find out when the devices will be available for purchase, although we don’t think it will be too long after the launch.

Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
Wed, 13 Mar 2024

Emotion artificial intelligence (AI) uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.

A wide range of industries already use emotional AI, including call centres, finance, banking, nursing and caregiving. Over 50% of large employers in the U.S. use emotional AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centres monitor what their operators say and their tone of voice.

Scholars have raised concerns about emotion AI’s scientific validity and its reliance on contested theories about emotion. They have also highlighted emotion AI’s potential for invading privacy and exhibiting racial, gender and disability bias.

Some employers use the technology as though it were flawless, while some scholars seek to reduce its bias and improve its validity, discredit it altogether, or suggest banning emotional AI, at least until more is known about its implications.

I study the social implications of technology. I believe that it is crucial to examine emotion AI’s implications for people subjected to it, such as workers – especially those marginalized by their race, gender or disability status.

Workers’ concerns

To understand where emotion AI used in the workplace is going, my colleague Karen Boyd and I set out to examine inventors’ conceptions of emotion AI in the workplace. We analyzed patent applications that proposed emotion AI technologies for the workplace. Purported benefits claimed by patent applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as making promotions, firing employees and assigning tasks.

We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?

My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey partly representative of the U.S. population and partly oversampled for people of colour, trans and nonbinary people and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 participants from the oversample. We found that 32% of respondents reported experiencing or expecting no benefit to them from emotion AI use, whether current or anticipated, in their workplace.

While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns. They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.

For example, 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions.

Participants’ voices

One participant who had multiple health conditions said: “The awareness that I am being analyzed would ironically have a negative effect on my mental health.” This means that despite emotion AI’s claimed goals to infer and improve workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. Indeed, other work by my colleagues Roemmich, Florian Schaub and I suggests that emotion AI-induced privacy loss can span a range of privacy harms, including psychological, autonomy, economic, relationship, physical and discrimination.

On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.”

Participants in the study also mentioned the potential for exacerbated power imbalances and said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace, pointing to how emotion AI use could potentially intensify already existing tensions in the employer-worker relationship. For instance, a respondent said: “The amount of control that employers already have over employees suggests there would be few checks on how this information would be used. Any ‘consent’ [by] employees is largely illusory in this context.”

Lastly, participants noted potential harms, such as emotion AI’s technical inaccuracies potentially creating false impressions about workers, and emotion AI creating and perpetuating bias and stigma against workers. In describing these concerns, participants highlighted their fear of employers relying on inaccurate and biased emotion AI systems, particularly against people of colour, women and trans individuals.

For example, one participant said: “Who is deciding what expressions ‘look violent,’ and how can one determine people as a threat just from the look on their face? A system can read faces, sure, but not minds. I just cannot see how this could actually be anything but destructive to minorities in the workplace.”

Participants noted that they would either refuse to work at a place that uses emotion AI – an option not available to many – or engage in behaviours to make emotion AI read them favourably to protect their privacy. One participant said: “I would exert a massive amount of energy masking even when alone in my office, which would make me very distracted and unproductive,” pointing to how emotion AI use would impose additional emotional labour on workers.

Worth the harm?

These findings indicate that emotion AI exacerbates existing challenges experienced by workers in the workplace, despite proponents claiming emotion AI helps solve these problems.

If emotion AI does work as claimed and measures what it claims to measure, and even if issues with bias are addressed in the future, there are still harms experienced by workers, such as the additional emotional labour and loss of privacy.


Read More: Demand for computer chips fuelled by AI could reshape global politics and security


If these technologies do not measure what they claim or they are biased, then people are at the mercy of algorithms deemed to be valid and reliable when they are not. Workers would still need to expend the effort to try to reduce the chances of being misread by the algorithm or to engage in emotional displays that would read favourably to the algorithm.

Either way, these systems function as panopticon-like technologies, creating privacy harms and feelings of being watched.


Microsoft censors AI prompts in Copilot after AI engineer speaks out
Mon, 11 Mar 2024

Microsoft has implemented changes to the guardrails that govern prompts in Copilot after one of the company’s AI engineers wrote to the Federal Trade Commission (FTC) last week regarding concerns they had with the platform’s image generation abilities.

Some of the now-blocked prompts include “pro choice,” “four twenty,” and “pro life” after the platform was found to produce “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use,” according to a CNBC report.

Stuff can confirm that when provided with those prompts Copilot Designer shows a message saying it couldn’t generate images because “something may have triggered Microsoft’s Responsible AI guidelines.”

Microsoft’s Designer gets slap on AI wrist

A Microsoft spokesperson told CNBC about the changes, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”

As reassuring as Microsoft probably wants that to seem, the fact that you can still generate questionable images or easily get Copilot Designer to infringe on copyrights after the changes isn’t doing the company any favours.


Read More: Microsoft Copilot celebrates 1st birthday with redesign on web and mobile


Neither is the fact that Shane Jones, the Microsoft engineer who wrote to the FTC, first tried reporting his findings internally back in December 2023. Microsoft acknowledged his concerns but that’s about as far as it went, instead referring him to OpenAI. After not hearing back from them, Jones posted an open letter to LinkedIn asking OpenAI’s board to suspend Dall-E 3, the AI model Copilot Designer is based on, until the issues could be resolved.

Microsoft’s lawyers didn’t like that and told Jones to remove his LinkedIn post, which he did. This is what prompted him to write letters to FTC chairperson Lina Khan and Microsoft’s board of directors, letters he shared with CNBC.

Not a particularly good look for Microsoft.

Source

Demand for computer chips fuelled by AI could reshape global politics and security
Fri, 08 Mar 2024

A global race to build powerful computer chips that are essential for the next generation of artificial intelligence (AI) tools could have a major impact on global politics and security.

The US is currently leading the race in the design of these chips, also known as semiconductors. But most of the manufacturing is carried out in Taiwan. The debate has been fuelled by the call by Sam Altman, CEO of ChatGPT’s developer OpenAI, for a US$5 trillion to US$7 trillion (£3.9 trillion to £5.5 trillion) global investment to produce more powerful chips for the next generation of AI platforms.

The amount of money Altman called for is more than the chip industry has spent in total since it began. Whatever the facts about those numbers, overall projections for the AI market are mind blowing. The data analytics company GlobalData forecasts that the market will be worth US$909 billion by 2030.

Unsurprisingly, over the past two years, the US, China, Japan and several European countries have increased their budget allocations and put in place measures to secure or maintain a share of the chip industry for themselves. China is catching up fast and is subsidising chips, including next-generation ones for AI, by hundreds of billions over the next decade to build a manufacturing supply chain.

Subsidies seem to be the preferred strategy for Germany too. The UK government has announced its plans to invest £100 million to support regulators and universities in addressing challenges around artificial intelligence.

The economic historian Chris Miller, the author of the book Chip War, has talked about how powerful chips have become a “strategic commodity” on the global geopolitical stage.

Despite the efforts by several countries to invest in the future of chips, there is currently a shortage of the types currently needed for AI systems. Miller recently explained that 90% of the chips used to train, or improve, AI systems are produced by just one company.

That company is the Taiwan Semiconductor Manufacturing Company (TSMC). Taiwan’s dominance in the chip manufacturing industry is notable because the island is also the focus for tensions between China and the US.


Read more: The microchip industry would implode if China invaded Taiwan, and it would affect everyone


Taiwan has, for the most part, been independent since the middle of the 20th century. However, Beijing believes it should be reunited with the rest of China and US legislation requires Washington to help defend Taiwan if it is invaded. What would happen to the chip industry under such a scenario is unclear, but it is obviously a focus for global concern.

The disruption of supply chains in chip manufacturing has the potential to bring entire industries to a halt. Access to the raw materials, such as rare earth metals, used in computer chips has also proven to be an important bottleneck. For example, China controls 60% of the production of gallium metal and 80% of the global production of germanium. These are both critical raw materials used in chip manufacturing.

And there are other, lesser-known bottlenecks. A process called extreme ultraviolet (EUV) lithography is vital for the ability to continue making computer chips smaller and smaller – and therefore more powerful. A single company in the Netherlands, ASML, is the only manufacturer of EUV systems for chip production.

However, chip factories are increasingly being built outside Asia again – something that has the potential to reduce over-reliance on a few supply chains. Plants in the US are being subsidised to the tune of US$43 billion and in Europe, US$53 billion.

For example, the Taiwanese semiconductor manufacturer TSMC is planning to build a multibillion dollar facility in Arizona. When it opens, that factory will not be producing the most advanced chips that it’s possible to currently make, many of which are still produced by Taiwan.

Moving chip production outside Taiwan could reduce the risk to global supplies in the event that manufacturing were somehow disrupted. But this process could take years to have a meaningful impact. It’s perhaps not surprising that, for the first time, this year’s Munich Security Conference created a chapter devoted to technology as a global security issue, including a discussion of the role of computer chips.

Wider issues

Of course, the demand for chips to fuel AI’s growth is not the only way that artificial intelligence will make a major impact on geopolitics and global security. The growth of disinformation and misinformation online has transformed politics in recent years by inflating prejudices on both sides of debates.

We have seen it during the Brexit campaign, during US presidential elections and, more recently, during the conflict in Gaza. AI could be the ultimate amplifier of disinformation. Take, for example, deepfakes – AI-manipulated videos, audio or images of public figures. These could easily fool people into thinking a major political candidate had said something they didn’t.

As a sign of this technology’s growing importance, at the 2024 Munich Security Conference, 20 of the world’s largest tech companies launched something called the “Tech Accord”. In it, they pledged to cooperate to create tools to spot, label and debunk deepfakes.


Read More: What is a GPU? An expert explains the chips powering the AI boom, and why they’re worth trillions


But should such important issues be left to tech companies to police? Mechanisms such as the EU’s Digital Service Act, the UK’s Online Safety Bill as well as frameworks to regulate AI itself should help. But it remains to be seen what impact they can have on the issue.

The issues raised by the chip industry and the growing demand driven by AI’s growth are just one way that AI is driving change on the global stage. But it remains a vitally important one. National leaders and authorities must not underestimate the influence of AI. Its potential to redefine geopolitics and global security could exceed our ability to both predict and plan for the changes.


  • Kirk Chang is a Professor of Management and Technology, University of East London
  • Alina Vaduva is a Director of the Business Advice Centre for Post Graduate Students at UEL, Ambassador of the Centre for Innovation, Management and Enterprise, University of East London
  • This article first appeared in The Conversation

Google is tidying up the “spammy, low-quality content” on Search
Wed, 06 Mar 2024

Is it just us, or has Google Search been slacking lately? Its usefulness is waning and we think it may have something to do with this AI-ridden era of the internet. “…Spammy, low-quality content,” as Google calls it, is plugging up Search and taking the spotlight off the ‘useful’ results. Google wants to do something about it. The search giant just announced “key changes” to “improve the quality of Search and the helpfulness of your results.”

Room for refining


One of the ways it’ll be doing so is by “refining some of [its] core ranking systems” to get a better sense of when web pages offer a poor user experience, are downright unhelpful, or “feel like they were created for search engines instead of people.” The big idea here is for Search to sift through the nonsense, bringing the most helpful information to the surface while simultaneously burying unoriginal and unhelpful content.

It’s specifically looking to clear out those results designed to game the SEO (search engine optimisation) at scale — especially where automation might be involved. “This could include sites created primarily to match very specific search queries,” it said.

“We believe these updates will reduce the amount of low-quality content on Search and send more traffic to helpful and high-quality sites. Based on our evaluations, we expect that the combination of this update and our previous efforts will collectively reduce low-quality, unoriginal content in search results by 40%,” the king of Search said.

Google’s announcement may not mention generative AI specifically, but it is a concern that’s being addressed, according to a Google spokesperson speaking with Gizmodo. The changes target “low-quality AI-generated content that’s designed to attract clicks, but that doesn’t add much original value.”

Google reckons it’s dealing with a “more complex” update than usual and changes could take up to a month to begin rolling out.


Read More: Google recognises South Africa as it launches its first Cloud region in Joburg


Spammers Paradise no more

Another change tackles spam, with more content being considered worthy of being on that list. It’s updating its spam policies to “better address new and evolving abusive practices that lead to unoriginal, low-quality content showing up on Search,” starting today.

“Today, scaled content creation methods are more sophisticated, and whether content is created purely through automation isn’t always as clear,” it said. “…we’re strengthening our policy to focus on this abusive behavior — producing content at scale to boost search ranking — whether automation, humans or a combination are involved. This will allow us to take action on more types of content with little to no value created at scale, like pages that pretend to have answers to popular searches but fail to deliver helpful content.”

Part of those changes involves stemming the flow of low-quality third-party content intent on “capitalizing on the hosting site’s strong reputation”, a host that might usually contain “great content.” Google mentions how a third-party producer might publish a payday loan review article on a trusted education website to “gain ranking benefits from the site.”

Starting 5 May, Google will consider this sort of result ‘spam’ and it’s giving affected sites time to make changes.

What is a GPU? An expert explains the chips powering the AI boom, and why they’re worth trillions
Wed, 06 Mar 2024

As the world rushes to make use of the latest wave of AI technologies, one piece of high-tech hardware has become a surprisingly hot commodity: the graphics processing unit, or GPU.

A top-of-the-line GPU can sell for tens of thousands of dollars, and leading manufacturer NVIDIA has seen its market valuation soar past US$2 trillion as demand for its products surges.

GPUs aren’t just high-end AI products, either. There are less powerful GPUs in phones, laptops and gaming consoles, too.

By now you’re probably wondering: what is a GPU, really? And what makes them so special?

What is a GPU?

GPUs were originally designed primarily to quickly generate and display complex 3D scenes and objects, such as those involved in video games and computer-aided design software. Modern GPUs also handle tasks such as decompressing video streams.

The “brain” of most computers is a chip called a central processing unit (CPU). CPUs can be used to generate graphical scenes and decompress videos, but they are typically far slower and less efficient on these tasks compared to GPUs. CPUs are better suited for general computation tasks, such as word processing and browsing web pages.

How are GPUs different from CPUs?

A typical modern CPU is made up of between 8 and 16 “cores”, each of which can process complex tasks in a sequential manner.

GPUs, on the other hand, have thousands of relatively small cores, which are designed to all work at the same time (“in parallel”) to achieve fast overall processing. This makes them well suited for tasks that require a large number of simple operations which can be done at the same time, rather than one after another.

Traditional GPUs come in two main flavours.

First, there are standalone chips, which often come in add-on cards for large desktop computers. Second are GPUs combined with a CPU in the same chip package, which are often found in laptops and game consoles such as the PlayStation 5. In both cases, the CPU controls what the GPU does.

Why are GPUs so useful for AI?

It turns out GPUs can be repurposed to do more than generate graphical scenes.

Many of the machine learning techniques behind artificial intelligence (AI), such as deep neural networks, rely heavily on various forms of “matrix multiplication”.

This is a mathematical operation where very large sets of numbers are multiplied and summed together. These operations are well suited to parallel processing, and hence can be performed very quickly by GPUs.
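To make the scale of that parallelism concrete, here is a minimal sketch (assuming the PyTorch library is installed, with a CUDA-capable GPU available for the second half) that times the same large matrix multiplication on a CPU and on a GPU. The matrix sizes are arbitrary, illustrative values.

    import time
    import torch

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    # Time the multiplication on the CPU.
    start = time.perf_counter()
    torch.matmul(a, b)
    cpu_time = time.perf_counter() - start

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()  # copy the matrices into GPU memory
        torch.cuda.synchronize()           # wait for the copies to finish
        start = time.perf_counter()
        torch.matmul(a_gpu, b_gpu)
        torch.cuda.synchronize()           # GPU work is asynchronous; wait before timing
        gpu_time = time.perf_counter() - start
        print(f"CPU: {cpu_time:.3f}s, GPU: {gpu_time:.3f}s")
    else:
        print(f"CPU: {cpu_time:.3f}s (no CUDA GPU available)")

On typical hardware the GPU run finishes many times faster, precisely because thousands of small cores each handle a slice of the multiply-and-sum work at the same time.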

What’s next for GPUs?

The number-crunching prowess of GPUs is steadily increasing, due to the rise in the number of cores and their operating speeds. These improvements are primarily driven by improvements in chip manufacturing by companies such as TSMC in Taiwan.

The size of individual transistors – the basic components of any computer chip – is decreasing, allowing more transistors to be placed in the same amount of physical space.

However, that is not the entire story. While traditional GPUs are useful for AI-related computation tasks, they are not optimal.

Just as GPUs were originally designed to accelerate computers by providing specialised processing for graphics, there are accelerators that are designed to speed up machine learning tasks. These accelerators are often referred to as “data centre GPUs”.

Some of the most popular accelerators, made by companies such as AMD and NVIDIA, started out as traditional GPUs. Over time, their designs evolved to better handle various machine learning tasks, for example by supporting the more efficient “brain float” number format.
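As a small illustration of that “brain float” (bfloat16) format, the following sketch (again assuming PyTorch) converts a tensor from the default 32-bit floats to bfloat16, which keeps roughly the same numeric range while storing fewer significant digits.

    import torch

    x = torch.tensor([3.1415926, 1.0e20, 1.0e-20])
    print(x.dtype)               # torch.float32 by default
    print(x.to(torch.bfloat16))  # same broad range, fewer significant digits

Trading precision for range in this way halves the memory per number and is often accurate enough for training neural networks.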

Image: NVIDIA’s latest GPUs have specialised functions to speed up the ‘transformer’ software used in many modern AI applications. (NVIDIA)

Other accelerators, such as Google’s Tensor Processing Units and Tenstorrent’s Tensix Cores, were designed from the ground up for speeding up deep neural networks.

Data centre GPUs and other AI accelerators typically come with significantly more memory than traditional GPU add-on cards, which is crucial for training large AI models. The larger the AI model, the more capable and accurate it is.

To further speed up training and handle even larger AI models, such as ChatGPT, many data centre GPUs can be pooled together to form a supercomputer. This requires more complex software in order to properly harness the available number crunching power. Another approach is to create a single very large accelerator, such as the “wafer-scale processor” produced by Cerebras.
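The sketch below (again assuming PyTorch, on a machine with more than one GPU) shows the simplest form of that pooling, where a single model’s work on each batch is split across all visible GPUs; real training clusters rely on far more sophisticated distributed software than this.

    import torch
    import torch.nn as nn

    # A small stand-in model; real workloads would be far larger.
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # replicate the model on every visible GPU
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    batch = torch.randn(256, 1024).to(device)
    print(model(batch).shape)           # each GPU processed a slice of the batch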

Are specialised chips the future?

CPUs have not been standing still either. Recent CPUs from AMD and Intel have built-in low-level instructions that speed up the number-crunching required by deep neural networks. This additional functionality mainly helps with “inference” tasks – that is, using AI models that have already been developed elsewhere.

To train the AI models in the first place, large GPU-like accelerators are still needed.


Read More: Nvidia’s secret “TrueHDR” tool uses AI for real-time HDR-gaming conversion


It is possible to create ever more specialised accelerators for specific machine learning algorithms. Recently, for example, a company called Groq has produced a “language processing unit” (LPU) specifically designed for running large language models along the lines of ChatGPT.

However, creating these specialised processors takes considerable engineering resources. History shows the usage and popularity of any given machine learning algorithm tends to peak and then wane – so expensive specialised hardware may become quickly outdated.

For the average consumer, however, that’s unlikely to be a problem. The GPUs and other chips in the products you use are likely to keep quietly getting faster.


AI: a way to freely share technology and stop it being misused already exists
Tue, 05 Mar 2024

There are lots of proposed ways to try to place limits on artificial intelligence (AI), because of its potential to cause harm in society, as well as its benefits.

For example, the EU’s AI Act places greater restrictions on systems based on whether they fall into the category of general purpose and generative AI or are considered to pose limited risk, high risk or an unacceptable risk.

This is a novel and bold approach to mitigating any ill effects. But what if we could adapt some tools that already exist? Software licensing is one well-known model that could be tailored to meet the challenges posed by advanced AI systems.

Open responsible AI licenses (OpenRails) might be part of this answer. AI that is licensed with OpenRail is similar to open-source software. A developer may release their system publicly under the licence. This means that anyone is free to use, adapt and re-share what was originally licensed.

The difference with OpenRail is the addition of conditions on using the AI responsibly. These include not breaking the law, impersonating people without consent or discriminating against people.

Alongside the mandatory conditions, OpenRails can be adapted to include other conditions that are directly relevant to the specific technology. For example, if an AI was created to categorise apples, the developer may specify it should never be used to categorise oranges, as doing so would be irresponsible.

The reason this model can be helpful is that many AI technologies are so general, they could be used for many things. It’s really hard to predict the nefarious ways they might be exploited.

So this model allows developers to help push forward open innovation while reducing the risk that their ideas might be used in irresponsible ways.

Open but responsible

In contrast, proprietary licences are more restrictive on how software can be used and adapted. They are designed to protect the interests of the creators and investors and have helped tech giants like Microsoft to build vast empires by charging for access to their systems.

Due to its broad reach, AI arguably demands a different, more nuanced approach that could promote the openness that drives progress. Currently many big firms are operating proprietary – closed – AI systems. But this could change, as there are several examples of companies using an open-source approach.

Meta’s generative AI system Llama-v2 and the image generator Stable Diffusion are open source. French AI startup Mistral, established in 2023 and now valued at US$2 billion (£1.6 billion), is set to soon openly release its latest model, which is rumoured to have performance comparable to GPT-4 (the model behind ChatGPT).

However, openness needs to be tempered with a sense of responsibility to society, because of the potential risks associated with AI. These include the potential for algorithms to discriminate against people, replace jobs and even pose existential threats to humanity.

Image: HuggingFace is the world’s largest AI developer hub. (Jesse Josua Benjamin, provided by the author)

We should also consider the more humdrum and everyday uses of AI. The technology will increasingly become part of our societal infrastructure, a central part of how we access information, construct opinions, and express ourselves culturally.

Such a universally important technology brings its own kind of risk, distinct from the robot apocalypse, but still very worthy of consideration.

One way to do this is to contrast what AI may do in the future with what free speech does now. The free sharing of ideas is not only crucial to upholding democratic values but it’s also the engine of culture. It facilitates innovation, encourages diversity and allows us to discern truth from falsehood.

The AI models being developed today will likely become a primary means of accessing information. They will shape what we say, what we see, what we hear and, by extension, how we think.

In other words, they will shape our culture in much the same way that free speech has. For this reason, there is a good argument that the fruits of AI innovation should be free, shared and open. And, as it happens, most of it already is.

Limits are needed

On the HuggingFace platform, the world’s largest AI developer hub, there are currently over 81,000 models that are published using “permissive open-source” licences. Just as the right to speak freely overwhelmingly benefits society, this open sharing of AI is an engine for progress.

However, free speech has necessary ethical and legal limits. Making false claims that are harmful to others or expressions of hatred based on ethnicity, religion, or disability are both widely accepted limitations. Providing innovators with a means to find this balance in the realm of AI innovation is what OpenRails do.

For example, deep-learning technology is applied in many worthy domains, but also underpins deepfake videos. The developers probably did not want their work to be used to spread misinformation or create non-consensual pornography.

An OpenRail would have provided them with the ability to share their work with restrictions that would forbid, for example, anything that would violate the law, cause harm, or result in discrimination.

Legally enforceable

Can OpenRAIL licences help us avoid the inevitable ethical dilemmas that AI will pose? Licensing can only go so far, with one limitation being that licences are only as good as the ability to enforce them.

Currently, enforcement would probably be similar to enforcement for music copying and software piracy and would involve the sending of cease and desist letters with the prospect of potential court action. While such measures do not stop piracy, they do discourage it.

Despite these limitations, there are many practical benefits: licences are well understood by the tech community, are easily scalable, and can be adopted with little effort. This has been recognised by developers, and to date, more than 35,000 models hosted on HuggingFace have adopted OpenRails.
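For developers wanting to check what they are building on, the licence a hosted model was published under is visible in its Hub metadata. Here is a minimal sketch using the huggingface_hub Python package; the model ID is a hypothetical placeholder, and the exact tag format can vary between models.

    from huggingface_hub import model_info

    info = model_info("some-org/some-model")  # hypothetical model ID
    # Licence information is exposed as a "license:..." tag on the model.
    licences = [tag.split(":", 1)[1] for tag in info.tags if tag.startswith("license:")]
    print(licences)  # e.g. ['openrail']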


Read More: We’ve been here before: AI promised humanlike machines – in 1958


Ironically, given the company name, OpenAI – the company behind ChatGPT – does not license its most powerful AI models openly. Instead, with its flagship language models, the company operates a closed approach that provides access to the AI to anyone willing to pay, while preventing others from building on, or adapting, the underlying technology.

As with the free speech analogy, the freedom to share AI openly is a right we should hold dearly, but perhaps not absolutely. While not a cure-all, licensing-based approaches such as OpenRail look like a promising piece of the puzzle.


  • Joseph Lindley is a Senior Research Fellow, Lancaster Institute for the Contemporary Arts, Lancaster University
  • Jesse Josua Benjamin is a Research Associate, Faculty of Arts and Social Sciences (FASS), Lancaster University
  • This article first appeared in The Conversation
