Stuff South Africa – South Africa's Technology News Hub

Do you have 7,513 unread emails in your inbox? Research suggests that’s unwise
(https://stuff.co.za/2024/03/19/do-you-have-7513-unread-emails-your-inbox/ – Tue, 19 Mar 2024)

How do you manage your emails? Are you an “inbox zero” kind of person, or do you just leave thousands of them unread?

Our new study, published today in the journal Information Research, suggests that leaving all your emails in the inbox is likely to leave you dissatisfied with your personal records management.

In an exploratory survey, we asked participants how they dealt with their personal records such as bills, online subscriptions and similar items. Many of these arrive by email.

We found that most respondents left their electronic records in their email. Only half saved items such as bills and other documents to other locations, like their computer or the cloud. But having a disorganised inbox also led to problems, including missing bills and losing track of important correspondence.

The risk of losing track of your emails

Receiving bills, insurance renewals and other household documents by email saves time and money, and reduces unnecessary paper use.

However, there are risks involved if you don’t stay on top of your electronic records. Respondents in our research reported issues such as lapsed vehicle registration, failing to cancel unwanted subscriptions, and overlooking tax deductions because it was too much trouble finding the receipts.

This suggests late fines and other email oversights could be costing people hundreds of dollars each year.

In addition to the financial costs, research suggests that not sorting and managing electronic records makes it more difficult to put together the information needed at tax time, or for other high-stakes situations, such as loan applications.

What did we find?

We surveyed over 300 diverse respondents on their personal electronic records management. Most of them were from Australia, but we also received responses from other countries, including the United Kingdom, the United States, Switzerland and Portugal.

Two-thirds of the respondents used their email to manage personal records, such as bills, receipts, subscriptions and more. Of those, we found that once respondents had dealt with their email, about half of them would sort the emails into folders, while the other half would leave everything in the inbox.

While most sorted their workplace email into folders, they were much less likely to sort their personal email in the same way.

The results also showed that only half (52%) of respondents who left all their email in the inbox were satisfied with their records management, compared to 71% of respondents who sorted their email into folders.

Of the respondents who saved their paperwork in the cloud (Google Drive, iCloud, Dropbox and similar), 83% reported being satisfied with their home records management.

The study was exploratory, so further research will be needed to see if our findings apply more universally. However, our statistical analysis did reveal practices associated with more satisfactory outcomes, and ones that might be better to avoid.

What can go wrong with an inbox-only approach?

Based on the responses, we have identified three main problems with leaving all your email in the inbox.

First, users can lose track of the tasks that need to be done. For example, a bill that needs to be paid could slip down the inbox unnoticed, buried by newer emails.

Second, relying on search to re-find emails means you need to know exactly what you’re looking for. For example, at tax time searching for charity donation receipts depends on remembering what to search for, as well as the exact wording in the email containing the receipt.


Read More: Stop emailing yourself: the best file sharing options across devices


Third, many bills and statements are not sent as attachments to emails, but rather as hyperlinks. If you change your bank or another service provider, those hyperlinks may not be accessible at a later date. Not being able to access missing payslips from a former employer can also cause issues, as shown by the Robodebt scandal or the recent case of the Australian Tax Office reviving old debts.

4 tips for better records management

When we asked respondents to nominate a preferred location for keeping their personal records, they tended to choose a more organised format than their current behaviour. Ideally, only 8% of the respondents would leave everything in their email inbox, unsorted.

Our findings suggest a set of practices that can help you get on top of your electronic records and prevent stress or financial losses:

  • sort your email into category folders, or save records in folders in the cloud or on a computer
  • download documents that arrive as links rather than attachments – such as utility bills and all your payslips
  • put important renewals in your calendar as reminders, and
  • delete junk mail and unsubscribe, so that your inbox can be turned into a to-do list.
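The folder-sorting advice above can even be partly automated. As a purely illustrative sketch (the keywords and folder names below are invented for the example, not drawn from the study), a simple rule table can suggest where an incoming record belongs:

```python
# Hypothetical sketch: map an email's sender/subject to a category folder,
# mirroring the "sort into category folders" tip. Rules are illustrative only.
RULES = [
    # (keyword to look for, folder to file under)
    ("invoice", "Bills"),
    ("bill", "Bills"),
    ("receipt", "Receipts"),
    ("payslip", "Payslips"),
    ("subscription", "Subscriptions"),
    ("renewal", "Renewals"),
]

def suggest_folder(sender: str, subject: str) -> str:
    """Return the first matching folder for an email, or 'Inbox' if none match."""
    text = f"{sender} {subject}".lower()
    for keyword, folder in RULES:
        if keyword in text:
            return folder
    return "Inbox"

print(suggest_folder("billing@utility.example", "Your March bill is ready"))  # Bills
print(suggest_folder("hr@employer.example", "Payslip for February"))          # Payslips
print(suggest_folder("friend@mail.example", "Lunch on Friday?"))              # Inbox
```

Most mail clients can apply rules like these natively (filters in Gmail, rules in Outlook), so no scripting is actually required to follow the advice.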

High-energy laser weapons: A defence expert explains how they work and what they are used for
(https://stuff.co.za/2024/03/09/high-energy-laser-weapons-a-defense-expert/ – Sat, 09 Mar 2024)

Nations around the world are rapidly developing high-energy laser weapons for military missions on land and sea, and in the air and space. Visions of swarms of small, inexpensive drones filling the skies or skimming across the waves are motivating militaries to develop and deploy laser weapons as an alternative to costly and potentially overwhelmed missile-based defenses.

Laser weapons have been a staple of science fiction since long before lasers were even invented. More recently, they have also featured prominently in some conspiracy theories. Both types of fiction highlight the need to understand how laser weapons actually work and what they are used for.

How lasers work

A laser uses electricity to generate photons, or light particles. The photons pass through a gain medium, a material that creates a cascade of additional photons, which rapidly increases the number of photons. All these photons are then focused into a narrow beam by a beam director.

[Diagram] Lasers work by turning electricity into photons and bouncing them back and forth between two mirrors through a special gain material that creates a cascade of many more photons. Shigeru23/Wikimedia, CC BY-SA

In the decades since the first laser was unveiled in 1960, engineers have developed a variety of lasers that generate photons at different wavelengths in the electromagnetic spectrum, from infrared to ultraviolet. The high-energy laser systems that are finding military applications are based on solid-state lasers that use special crystals to convert the input electrical energy into photons. A key aspect of high-power solid-state lasers is that the photons are created in the infrared portion of the electromagnetic spectrum and so cannot be seen by the human eye.

When it interacts with a surface, a laser beam generates different effects based on its photon wavelength, the power in the beam and the material of the surface. Low-power lasers that generate photons in the visible part of the spectrum are useful as light sources for pointers and light shows at public events. These beams are of such low power that they simply reflect off a surface without damaging it.

Higher-power laser systems are used to cut through biological tissue in medical procedures. The highest-power lasers can heat, vaporize, melt and burn through many different materials and are used in industrial processes for welding and cutting.

In addition to the power level of the laser, the ability to deliver these various effects is determined by the distance between the laser and its target.

Laser weapons

Based in part on the progress made in high-power industrial lasers, militaries are finding an increasing number of uses for high-energy lasers. One key advantage for high-energy laser weapons is that they provide an “infinite magazine.” Unlike traditional weapons such as guns and cannons that have a finite amount of ammunition, a high-energy laser can keep firing as long as it has electrical power.

The U.S. Army is deploying a truck-based high-energy laser to shoot down a range of targets, including drones, helicopters, mortar shells and rockets. The 50-kilowatt laser is mounted on the Stryker infantry fighting vehicle, and the Army deployed four of the systems for battlefield testing in the Middle East in February 2024.

The U.S. Navy has deployed a ship-based high-energy laser to defend against small and fast-moving ocean surface vessels as well as missiles and drones. The Navy installed a 60-kilowatt laser weapon on the destroyer USS Preble in August 2022.

The Air Force is developing high-energy lasers on aircraft for defensive and offensive missions. In 2010, the Air Force tested a megawatt laser mounted on a modified Boeing 747, hitting a ballistic missile as it was being launched. The Air Force is currently working on a smaller weapon system for fighter aircraft.

Russia appears to be developing a ground-based high-energy laser to “blind” their adversaries’ satellites.

[Image] A U.S. Army Stryker armored fighting vehicle configured with a high-energy laser weapon, the extensions at the top rear of the vehicle. Jim Kendall, U.S. Army

Limitations of laser weapons

One key challenge for militaries using high-energy lasers is the high levels of power needed to create useful effects from afar. Unlike an industrial laser that may be just a few inches from its target, military operations involve significantly larger distances. To defend against an incoming threat, such as a mortar shell or a small boat, laser weapons need to engage their targets before they can inflict any damage.

However, to burn through materials at safe distances requires tens to hundreds of kilowatts of power in the laser beam. The smallest prototype laser weapon draws 10 kilowatts of power, roughly equivalent to an electric car. The latest high-power laser weapon under development draws 300 kilowatts of power, enough to power 30 households. And because high-energy lasers are only 50% efficient at best, they generate a tremendous amount of waste heat that has to be managed.

This means high-energy lasers require extensive power generation and cooling infrastructure that places limits on the types of effects that can be generated from different military platforms. Army trucks and Air Force fighter jets have the least amount of space for high-energy laser weapons, and so these systems are limited to targets that require relatively low power, such as downing drones or disabling missiles. Ships and larger aircraft can accommodate larger high-energy lasers with the potential to burn holes in boats and ground vehicles. Permanent ground-based systems have the least constraints and therefore the highest power, making it potentially feasible to dazzle a distant satellite.

Another important limitation for platform-based high-energy laser weapons relates to the infinite magazine concept. Since the truck, ship or airplane must carry the laser’s power source, that source’s capacity is limited, and the laser can only be used for a limited amount of time before its batteries need recharging.

There are also fundamental limits to high-energy laser weapons, including diminished effectiveness in rain, fog and smoke, which scatter laser beams. The laser beams also need to remain locked onto their targets for several seconds in order to inflict damage. Current prototype laser weapons are also proving a challenge to maintain in combat zones.

No fire from the skies

A new type of conspiracy theory has emerged in recent years claiming that nefarious entities have used airborne high-energy lasers to start wildfires in California, Hawaii and Texas. This is highly unlikely for several reasons.

First, the power level needed to ignite vegetation with a high-energy laser from the sky would require a large power source installed on a large aircraft. A plane that size would have been highly visible right before any fires were ignited. Second, in some images that claim to show the fires being started, the laser beams are green. Beams from high-energy lasers are invisible.


Read More: Drone-zapping laser weapons now effective (and cheap) reality


What comes next

In the future, high-energy laser weapons are likely to continue to evolve with increased power levels that will expand the range of targets they can be used against.

Emerging threats posed by low-cost, weaponized drones like those in use in conflicts in the Middle East and Ukraine make it more likely that high-energy lasers will also find nonmilitary applications such as defending the public against terrorist attacks.


  • Iain Boyd is a Director, Center for National Security Initiatives, and Professor of Aerospace Engineering Sciences, University of Colorado Boulder
  • This article first appeared in The Conversation

Demand for computer chips fuelled by AI could reshape global politics and security
(https://stuff.co.za/2024/03/08/demand-for-computer-chips-fuelled-by-ai/ – Fri, 08 Mar 2024)

A global race to build powerful computer chips that are essential for the next generation of artificial intelligence (AI) tools could have a major impact on global politics and security.

The US is currently leading the race in the design of these chips, also known as semiconductors. But most of the manufacturing is carried out in Taiwan. The debate has been fuelled by the call by Sam Altman, CEO of ChatGPT’s developer OpenAI, for a US$5 trillion to US$7 trillion (£3.9 trillion to £5.5 trillion) global investment to produce more powerful chips for the next generation of AI platforms.

The amount of money Altman called for is more than the chip industry has spent in total since it began. Whatever the facts about those numbers, overall projections for the AI market are mind-blowing. The data analytics company GlobalData forecasts that the market will be worth US$909 billion by 2030.

Unsurprisingly, over the past two years, the US, China, Japan and several European countries have increased their budget allocations and put in place measures to secure or maintain a share of the chip industry for themselves. China is catching up fast and is subsidising chips, including next-generation ones for AI, to the tune of hundreds of billions of dollars over the next decade to build a manufacturing supply chain.

Subsidies seem to be the preferred strategy for Germany too. The UK government has announced its plans to invest £100 million to support regulators and universities in addressing challenges around artificial intelligence.

The economic historian Chris Miller, the author of the book Chip War, has talked about how powerful chips have become a “strategic commodity” on the global geopolitical stage.

Despite the efforts by several countries to invest in the future of chips, there is currently a shortage of the types currently needed for AI systems. Miller recently explained that 90% of the chips used to train, or improve, AI systems are produced by just one company.

That company is the Taiwan Semiconductor Manufacturing Company (TSMC). Taiwan’s dominance in the chip manufacturing industry is notable because the island is also the focus for tensions between China and the US.


Read more: The microchip industry would implode if China invaded Taiwan, and it would affect everyone


Taiwan has, for the most part, been independent since the middle of the 20th century. However, Beijing believes it should be reunited with the rest of China and US legislation requires Washington to help defend Taiwan if it is invaded. What would happen to the chip industry under such a scenario is unclear, but it is obviously a focus for global concern.

The disruption of supply chains in chip manufacturing has the potential to bring entire industries to a halt. Access to the raw materials, such as rare earth metals, used in computer chips has also proven to be an important bottleneck. For example, China controls 60% of the production of gallium metal and 80% of the global production of germanium. Both are critical inputs to chip manufacturing.

And there are other, lesser-known bottlenecks. A process called extreme ultraviolet (EUV) lithography is vital for the ability to continue making computer chips smaller and smaller – and therefore more powerful. A single company in the Netherlands, ASML, is the only manufacturer of EUV systems for chip production.

However, chip factories are increasingly being built outside Asia again – something that has the potential to reduce over-reliance on a few supply chains. Plants in the US are being subsidised to the tune of US$43 billion and in Europe, US$53 billion.

For example, the Taiwanese semiconductor manufacturer TSMC is planning to build a multibillion-dollar facility in Arizona. When it opens, that factory will not be producing the most advanced chips that it’s currently possible to make, which are still produced in Taiwan.

Moving chip production outside Taiwan could reduce the risk to global supplies in the event that manufacturing were somehow disrupted. But this process could take years to have a meaningful impact. It’s perhaps not surprising that, for the first time, this year’s Munich Security Conference included a chapter devoted to technology as a global security issue, with discussion of the role of computer chips.

Wider issues

Of course, the demand for chips to fuel AI’s growth is not the only way that artificial intelligence will make a major impact on geopolitics and global security. The growth of disinformation and misinformation online has transformed politics in recent years by inflating prejudices on both sides of debates.

We have seen it during the Brexit campaign, during US presidential elections and, more recently, during the conflict in Gaza. AI could be the ultimate amplifier of disinformation. Take, for example, deepfakes – AI-manipulated videos, audio or images of public figures. These could easily fool people into thinking a major political candidate had said something they didn’t.

As a sign of this technology’s growing importance, at the 2024 Munich Security Conference, 20 of the world’s largest tech companies launched something called the “Tech Accord”. In it, they pledged to cooperate to create tools to spot, label and debunk deepfakes.


Read More: What is a GPU? An expert explains the chips powering the AI boom, and why they’re worth trillions


But should such important issues be left to tech companies to police? Mechanisms such as the EU’s Digital Service Act, the UK’s Online Safety Bill as well as frameworks to regulate AI itself should help. But it remains to be seen what impact they can have on the issue.

The issues raised by the chip industry and the growing demand driven by AI’s growth are just one way that AI is driving change on the global stage. But it remains a vitally important one. National leaders and authorities must not underestimate the influence of AI. Its potential to redefine geopolitics and global security could exceed our ability to both predict and plan for the changes.


  • Kirk Chang is a Professor of Management and Technology, University of East London
  • Alina Vaduva is a Director of the Business Advice Centre for Post Graduate Students at UEL, Ambassador of the Centre for Innovation, Management and Enterprise, University of East London
  • This article first appeared in The Conversation

WhatsApp is working on beefing up your profile picture’s security
(https://stuff.co.za/2024/02/21/whatsapp-beefing-profile-picture-security/ – Wed, 21 Feb 2024)

Meta’s WhatsApp might still be a little behind the curve in terms of features that other messengers take for granted (message scheduling, when?), but there’s no denying it’s one of the safest around. There’s usually a regular flow of security updates in the pipeline, and now WABetaInfo is reporting the messenger is working on a new feature that’ll block unwanted contacts and others from screenshotting your profile picture.

Better late than never


This isn’t the first time WhatsApp’s attempted to put non-consensual profile picture saving to rest. More than five years ago, WhatsApp removed the ability to download another user’s profile picture. That’s apparently all the attention it paid to the issue, however, as users were simply able to… screenshot the image and save it that way.

That’ll be changing soon, after WABetaInfo spotted the new security feature in the 2.24.4.25 update for Android. When a user attempts to screenshot another’s profile picture, they’ll be greeted with the prompt “Can’t take a screenshot due to app restrictions”. Obviously WhatsApp can’t stop bad actors from using a second device to capture the same image, but it should discourage at least a couple of people from taking that extra step.


Read More: Absa launches ChatWallet to let you bank on WhatsApp – here’s how to use it


It’s not yet known whether the feature will be automatically applied to all users, or if it can be turned on and off at the flip of a switch. The app already has certain security features in place that allow users to prevent unwanted contacts from seeing their profile picture altogether, or until certain criteria are met. (You can change this by heading to Settings > Privacy > Profile photo, by the way).

We’ll get a clearer idea of how the feature works once it’s released to the public which, according to WABetaInfo, should be happening “over the coming weeks,” once the beta testers have had their fun.

Source

How SIM swap scammers can swindle you
(https://stuff.co.za/2024/02/20/how-sim-swap-scammers-can-swindle-you/ – Tue, 20 Feb 2024)

You’re more than likely walking around with a SIM 24/7… That is, if you have a mobile device of some kind. If you’re here, we’re guessing you do. That’s why you should be more cognizant of SIM swap fraud, how you can be targeted, and what you stand to lose if someone gains access to your SIM and mobile number.

This sneaky scheme targets unsuspecting individuals and aims to hijack their phone numbers and wreak havoc on their personal and financial lives.

Copy, paste


SIM swap fraud is a sophisticated form of identity theft. In an ‘attack’, cybercriminals manipulate cellular service providers into transferring a victim’s mobile service to a SIM card under their control. Once the SIM swap is successful, all incoming network traffic, including calls and text messages, is redirected to the scammer’s device.

This grants them access to any accounts linked to the victim’s phone number, bypassing two-factor authentication (2FA) measures and potentially leading to unauthorised access to bank accounts, social media profiles and other platforms you don’t want them to access.

How they gain access


The process of executing a SIM swap attack involves several steps, starting with the scammer gathering personal information about the victim. This information may be collected via phishing scams, data breaches, or social engineering tactics.

Once they’re armed with this info, the fraudster then contacts the victim’s mobile network posing as the legitimate account holder, and requests a SIM card swap.

During this interaction, the scammer will provide convincing details to authenticate their (well… your) identity, such as financial information, device details, personal data, call logs, and account credentials.

How it affects you


Falling victim to a SIM swap can be pretty devastating. The fraudster can infiltrate everything from bank accounts to investment apps and personal cloud data – anything that relies on 2FA for security.

This can lead to unauthorised access to sensitive information, financial loss, and even identity theft. And of course, you’ll lose access to your mobile service, making it difficult to make calls or send messages to sort out these issues.

Detecting a SIM swap attack early can help mitigate these risks, however. Make sure you know the signs:

  • Sudden inability to make calls or send SMSes due to a loss of network connectivity.
  • Notifications from service providers about suspicious account activity or changes.
  • Difficulty accessing online accounts or discovering unauthorised transactions.
  • Unexplained disruptions in mobile service or unusual behaviour on your device.

It’s all about prevention

Protecting yourself from SIM swap fraud requires vigilance and proactive measures. Here are some essential steps you can take to reduce the risk:

Safeguard personal information: Be cautious about sharing sensitive information online, especially on social media platforms. Avoid disclosing details like your address, phone number, full name, or birthdate, which fraudsters could exploit.

Exercise caution online: Be wary of unsolicited calls, emails, or SMSes requesting personal information. Legitimate institutions typically do not solicit sensitive data through these channels. Verify the authenticity of communications before offering up your personal info.

Enhance account security: Utilise robust authentication methods, such as biometric authentication or strong, unique passwords. Consider using reputable password managers to generate and store complex passwords securely.

Monitor account activity: Regularly review your bank and mobile carrier accounts for any suspicious activity. Enable alerts or notifications to receive immediate alerts about account changes or unusual transactions.

Explore alternative authentication methods: Consider using authentication apps or hardware tokens for 2FA instead of relying solely on SMS-based verification. These methods offer greater security and are less susceptible to SIM swap attacks.
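To see why app-based 2FA resists SIM swapping, note that the codes are computed locally from a shared secret and the current time, so nothing travels over the phone network to be intercepted. A minimal sketch of the standard TOTP algorithm (RFC 6238), using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Return the RFC 6238 time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at Unix time 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # 287082
```

This is the same computation an authenticator app performs on your handset; the mobile network is never involved, which is exactly what defeats the SIM swapper.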

Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls
(https://stuff.co.za/2024/02/09/using-ai-to-monitor-the-internet-for-terror/ – Fri, 09 Feb 2024)

Every minute, millions of social media posts, photos and videos flood the internet. On average, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video.

This vast ocean of online material needs to be constantly monitored for harmful or illegal content, like promoting terrorism and violence.

The sheer volume of content means that it’s not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations.

The concerted effort in recent years to develop tools for the identification and removal of online terrorist content has, in part, been fuelled by the emergence of new laws and regulations. This includes the EU’s terrorist content online regulation, which requires hosting service providers to remove terrorist content from their platform within one hour of receiving a removal order from a competent national authority.

Behaviour and content-based tools

In broad terms, there are two types of tools used to root out terrorist content. The first looks at certain account and message behaviour. This includes how old the account is, the use of trending or unrelated hashtags and abnormal posting volume.

In many ways, this is similar to spam detection, in that it does not pay attention to content, and is valuable for detecting the rapid dissemination of large volumes of content, which are often bot-driven.
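The behavioural signals named above lend themselves to simple scoring rules. The thresholds and weights below are invented for illustration (real systems tune them on labelled data), but the sketch shows the shape of the approach:

```python
# Illustrative only: the signals mirror those described above (account age,
# unrelated trending hashtags, abnormal posting volume), but the thresholds
# are made up for this example and not taken from any real system.
def behaviour_score(account_age_days, posts_last_hour, trending_hashtags_per_post):
    """Return a 0-3 suspicion score; higher means more bot-like behaviour."""
    score = 0
    if account_age_days < 7:                # very new account
        score += 1
    if posts_last_hour > 60:                # posting faster than a human plausibly can
        score += 1
    if trending_hashtags_per_post > 3:      # piggybacking on unrelated trends
        score += 1
    return score

print(behaviour_score(account_age_days=2, posts_last_hour=120, trending_hashtags_per_post=5))  # 3
print(behaviour_score(account_age_days=400, posts_last_hour=3, trending_hashtags_per_post=0))  # 0
```

Because these rules never look at what a post says, they are cheap to run at scale, which is why they pair well with spam-style detection of bot-driven floods.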

The second type of tool is content-based. It focuses on linguistic characteristics, word use, images and web addresses. Automated content-based tools take one of two approaches.

1. Matching

The first approach is based on comparing new images or videos to an existing database of images and videos that have previously been identified as terrorist in nature. One challenge here is that terror groups are known to try and evade such methods by producing subtle variants of the same piece of content.

After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity were in circulation.

So, to combat this, matching-based tools generally use perceptual hashing rather than cryptographic hashing. Hashes are a bit like digital fingerprints, and cryptographic hashing acts like a secure, unique identity tag. Even changing a single pixel in an image drastically alters its fingerprint, preventing false matches.

Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes like pixel colour adjustments, but identifies images with the same core content. This makes perceptual hashing more resilient to tiny alterations to a piece of content. But it also means that the hashes are not entirely random, and so could potentially be used to try and recreate the original image.
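The difference can be made concrete with a toy example. The "average hash" below is a deliberately simplified stand-in for real perceptual hashing (production systems are far more sophisticated), but it shows why a cryptographic hash flips completely on a one-pixel edit while a perceptual hash does not:

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: each pixel becomes 1 if it is >= the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p >= mean else "0" for p in pixels)

# A tiny 8-value "image" and a copy with one pixel nudged by a single unit.
original = [10, 200, 12, 190, 15, 210, 11, 205]
tweaked  = [10, 200, 12, 190, 15, 210, 11, 206]

# Cryptographic hashes differ completely on the tiny change...
print(hashlib.sha256(bytes(original)).hexdigest()[:16])
print(hashlib.sha256(bytes(tweaked)).hexdigest()[:16])

# ...while the perceptual hashes are identical, so the near-duplicate still matches.
print(average_hash(original))  # 01010101
print(average_hash(tweaked))   # 01010101
```

This is the trade-off the article describes: the perceptual hash tolerates the subtle variants terror groups produce, at the cost of leaking some structural information about the original image.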

2. Classification

The second approach relies on classifying content. It uses machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a lot of examples like texts labelled as terrorist content or not by human content moderators. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own.

Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review.
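A deliberately tiny sketch of this classification approach: a Naive Bayes text classifier trained on a handful of labelled examples. The training texts and labels are harmless invented stand-ins for a real moderated dataset, and real systems use far richer features, but the learn-from-labels-then-predict loop is the same:

```python
import math
from collections import Counter

# Invented stand-in training data: (text, label) pairs a human moderator produced.
TRAIN = [
    ("join our cooking class tonight", "benign"),
    ("new recipe video uploaded", "benign"),
    ("share this attack propaganda video", "flag"),
    ("join the attack recruit now", "flag"),
]

def train(examples):
    """Count word frequencies per label, as Naive Bayes training requires."""
    word_counts = {label: Counter() for _, label in examples}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log prior + log likelihood."""
    total = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        # add-one (Laplace) smoothing so unseen words don't zero out the score
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAIN)
print(classify("attack video recruit", *model))   # flag
print(classify("cooking video tonight", *model))  # benign
```

The article’s caveats map directly onto this sketch: the classifier only knows the vocabulary of its training set, which is why dated data, new terminology, irony and dialect all degrade it.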

This approach also faces challenges, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and resource-intensive.

The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also have difficulty understanding context, including subtlety and irony. They also lack cultural sensitivity, including variations in dialect and language use across different groups.

These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as Ethiopia and Romania, while free speech activists in countries such as Egypt, Syria and Tunisia have reported having their content removed.

We still need human moderators

So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged.

But this is demanding and draining work, and there have been damning reports regarding the working conditions of moderators, with many tech companies such as Meta outsourcing this work to third-party vendors.

To address this, we recommend the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the well-being of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly.

But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise.

Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be worth $32bn by 2031.


Read More: AI: the silent partner in your side hustle


But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.

So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded Tech Against Terrorism Europe project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available like Meta’s Hasher-Matcher-Actioner, which companies can use to build their own database of hashed terrorist content.

International organisations, governments and tech platforms must prioritise the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.


Surveillance and the state: South Africa’s proposed new spying law is open for comment – an expert points out its flaws https://stuff.co.za/2024/02/08/surveillance-and-state-south-africa-law-spy/ Thu, 08 Feb 2024 06:52:40 +0000 https://stuff.co.za/?p=189363 In early 2021, the South African Constitutional Court found that the country’s State Security Agency, through its signals intelligence agency, the National Communication Centre, was conducting bulk interception of electronic signals unlawfully.

Bulk interception involves the surveillance of electronic signals, including communication signals and internet traffic, on a very large scale, and often on an untargeted basis. If intelligence agents misuse this capability, it can have a massive, negative impact on the privacy of innocent people.

The court found that there was no law authorising the practice of bulk surveillance and limiting its potential abuse. It ordered that the agency cease such surveillance until there was.

In November 2023, the South African presidency responded to the ruling by tabling a bill to, among other things, plug the gaps identified by the country’s highest court. The General Intelligence Laws Amendment Bill sets out how the surveillance centre, based in Pretoria, the capital city, should be regulated.

I have researched intelligence and surveillance for over a decade and also served on the 2018 High Level Review Panel on the State Security Agency. In my view, the bill lacks basic controls over how this highly invasive form of surveillance should be used. This compromises citizens’ privacy and increases the potential for the state to repeat previous abuses. I discuss some of these abuses below.

The dangers

Intelligence agencies use bulk interception to put large numbers of people, and even whole populations, under surveillance. This is regardless of whether they are suspected of serious crimes or threats to national security. Their intention is to obtain strategic intelligence about longer term external threats to a country’s security, and that may be difficult to obtain by other means.

Former United States National Security Agency contractor Edward Snowden’s leaks of classified intelligence documents showed how these capabilities had been used to spy on US citizens. The leaks also showed that British intelligence spied on African trade negotiators, politicians and business people to give the UK government and its partners unfair trade advantages.

In the case of South Africa, around 2005, rogue agents in the erstwhile National Intelligence Agency misused bulk interception to spy on senior members of the ruling African National Congress, the opposition, business people and civil servants. This was despite the agency’s mandate being to focus on foreign threats.

These rogue agents were able to abuse bulk interception because there was no law controlling and limiting how these capabilities were to be used. A 2008 commission of inquiry, appointed by then-minister of intelligence Ronnie Kasrils, called for this law to be enacted. The government refused to do so until it was forced to act by the Constitutional Court ruling.

The government justified its refusal to act by claiming that the National Communication Centre was regulated adequately through the National Strategic Intelligence Act. The court rejected this argument because the act failed to address the regulation of bulk interception directly.

What the Constitutional Court said

The 2021 Constitutional Court judgment did not address whether bulk interception should ever be acceptable as a surveillance practice. However, it appeared to accept the agency’s argument that it was an internationally accepted method of monitoring transnational signals. But the legitimacy of this practice is highly contested internationally. That’s because this form of surveillance usually extends far beyond what is needed to protect national security.

The court indicated that it would want to see a law authorising bulk surveillance that sets out “the nuts and bolts of the Centre’s functions”. The law would also need to spell out in “clear, precise terms the manner, circumstances or duration of the collection, gathering, evaluation and analysis of domestic and foreign intelligence”.

The court would also be looking for details on “how these various types of intelligence must be captured, copied, stored, or distributed”.

What the amendment bill says

The amendment bill provides for the proper establishment of the National Communication Centre and its functions. This includes the collection and analysis of intelligence from electronic signals, and information security or cryptography. A parliamentary ad hoc committee has set a deadline of 15 February 2024 for public comment.

The bill says, in vague terms, that the centre shall gather, correlate, evaluate and analyse relevant intelligence to identify any threat or potential threat to national security. But it doesn’t provide any of the details the court said it would be looking for. This is a major weakness.

The bill has one strength, though. It states that the surveillance centre needs to seek the permission of a retired judge, assisted by two interception experts, before conducting bulk interception. The judge will be appointed by the president, and the experts by the minister in charge of intelligence. The judge’s position will be located in the presidency.

However, it does not spell out the bases on which the judge will take decisions. The fact that the judge would be an executive appointment also raises doubts about his or her independence.

Inadequate benchmarking

The bill fails to incorporate international benchmarks on the regulation of strategic intelligence and bulk interception in a democracy. These require that a domestic legal framework provide what the European Court of Human Rights has referred to as “end-to-end” safeguards covering all stages of bulk interception.


Read More: From self-driving cars to military surveillance: quantum computing can help secure the future of AI systems


The European Court has stated that a domestic legal framework should define

  • the grounds on which bulk interception may be authorised
  • the circumstances
  • the procedures to be followed for granting authorisation
  • procedures for selecting, examining and using material obtained from intercepts

The framework should also set out

  • the precautions to be taken when communicating the material to other parties
  • limits on the duration of interception
  • procedures for the storage of intercepted material
  • the circumstances in which such material must be erased and destroyed
  • supervision procedures by an independent authority
  • compliance procedures for review of surveillance once it has been completed.

The bill does not meet these requirements.

Incorporating these details in regulations would not be adequate on its own, as the bill gives the intelligence minister too much power to set the ground rules for bulk interception. These rules are also unlikely to be subjected to the same level of public scrutiny as the bill.

The fact that the presidency is attempting to get away with the most minimal regulation of bulk interception raises doubt about its stated commitment to intelligence reform to limit the scope for abuse, and Parliament needs to correct the bill’s clear deficiencies.


How to protect your data privacy: A digital media expert provides steps you can take and explains why you can’t go it alone https://stuff.co.za/2024/01/29/protect-your-data-privacy-a-digital-media/ Mon, 29 Jan 2024 07:14:52 +0000 https://stuff.co.za/?p=188911 Perfect safety is no more possible online than it is when driving on a crowded road with strangers or walking alone through a city at night. Like roads and cities, the internet’s dangers arise from choices society has made. To enjoy the freedom of cars comes with the risk of accidents; to have the pleasures of a city full of unexpected encounters means some of those encounters can harm you. To have an open internet means people can always find ways to hurt each other.

But some highways and cities are safer than others. Together, people can make their online lives safer, too.

I’m a media scholar who researches the online world. For decades now, I have experimented on myself and my devices to explore what it might take to live a digital life on my own terms. But in the process, I’ve learned that my privacy cannot come from just my choices and my devices.

This is a guide for getting started, with the people around you, on the way toward a safer and healthier online life.

The threats

The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort: hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own.

A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval.

A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them – protection from everyone but themselves, as they liberally collect your data. Google, for instance, provides tools with high security standards, but its business model is built on selling ads based on what people do with those tools. Many people feel they have to accept this deal, because everyone around them already has.

The stakes are high. Feminist and critical race scholars have demonstrated that surveillance has long been the basis of unjust discrimination and exclusion. As African American studies scholar Ruha Benjamin puts it, online surveillance has become a “new Jim Code,” excluding people from jobs, fair pricing and other opportunities based on how computers are trained to watch and categorize them.

Once again, there is no formula for safety. When you make choices about your technology, individually or collectively, you are really making choices about whom and how you trust – shifting your trust from one place to another. But those choices can make a real difference.

Phase 1: Basic data privacy hygiene

To get started with digital privacy, there are a few things you can do fairly easily on your own. First, use a password manager like Bitwarden or Proton Pass, and make all your passwords unique and complex. If you can remember a password easily, it’s probably not keeping you safe. Also, enable two-factor authentication, which typically involves receiving a code in a text message, wherever you can.
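
For the curious, here is a sketch of how the time-based codes generated by authenticator apps work (TOTP, RFC 6238) – a stronger second factor than codes sent by SMS. It uses only Python’s standard library, and the secret shown is the RFC’s published test key, not one anyone should reuse.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a one-time code from a shared secret and the current time."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 -> 94287082
print(totp(b"12345678901234567890", 59, digits=8))        # 94287082
print(totp(b"12345678901234567890", int(time.time())))    # current 6-digit code
```

Because the code is derived from a shared secret plus the clock, it changes every 30 seconds and never travels over the phone network, which is why authenticator apps resist the SIM-swap attacks that plague SMS codes.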

As you browse the web, use a browser like Firefox or Brave with a strong commitment to privacy, and add to that a good ad blocker like uBlock Origin. Get in the habit of using a search engine like DuckDuckGo or Brave Search that doesn’t profile you based on your past queries.

On your phone, download only the apps you need. It can help to wipe and reset everything periodically to make sure you keep only what you really use. Beware especially of apps that track your location and access your files. For Android users, F-Droid is an alternative app store with more privacy-preserving tools. The Consumer Reports app Permission Slip can help you manage how other apps use your data.

Phase 2: Shifting away

Next, you can start shifting your trust away from companies that make their money from surveillance. But this works best if you can get your community involved; if they are using Gmail, and you email them, Google gets your email whether you use Gmail yourself or not. Try an email provider like Proton Mail that doesn’t rely on targeted ads, and see if your friends will try it, too. For mobile chat, Signal makes encrypted messages easy, but only if others are using it with you.


Read More: What is credential stuffing and how can I protect myself? A cybersecurity researcher explains


You can also try using privacy-preserving operating systems for your devices. GrapheneOS and /e/OS are versions of Android that avoid sending your phone’s data to Google. For your computer, Pop!_OS is a friendly version of Linux. Find more ideas for shifting away at science and technology scholar Janet Vertesi’s Opt-Out Project website.

Phase 3: New foundations

If you are ready to go even further, rethink how your community or workplace collaborates. In my university lab, we run our own servers to manage our tools, including Nextcloud for file sharing and Matrix for chat.

This kind of shift, however, requires a collective commitment in how organizations spend money on technology, away from big companies and toward investing in the ability to manage your tools. It can take extra work to build what I call “governable stacks” – tools that people manage and control together – but the result can be a more satisfying, empowering relationship with technology.

Protecting each other

Too often, people are told that being safe online is a job for individuals, and it is your fault if you’re not doing it right. But I think this is a kind of victim blaming. In my view, the biggest source of danger online is the lack of public policy and collective power to prevent surveillance from being the basic business model for the internet.

For years, people have organized “cryptoparties” where they can come together and learn how to use privacy tools. You can also support organizations like the Electronic Frontier Foundation that advocate for privacy-protecting public policy. If people assume that privacy is just an individual responsibility, we have already lost.


The top risks from technology that we’ll be facing by the year 2040 https://stuff.co.za/2024/01/27/the-top-risks-from-technology-that-well/ Sat, 27 Jan 2024 08:00:08 +0000 https://stuff.co.za/?p=188898 Bewilderingly rapid changes are happening in the technology and reach of computer systems. There are exciting advances in artificial intelligence, in the masses of tiny interconnected devices we call the “Internet of Things” and in wireless connectivity.

Unfortunately, these improvements bring potential dangers as well as benefits. To get a safe future we need to anticipate what might happen in computing and address it early. So, what do experts think will happen, and what might we do to prevent major problems?

To answer that question, our research team from universities in Lancaster and Manchester turned to the science of looking into the future, which is called “forecasting”. No one can predict the future, but we can put together forecasts: descriptions of what may happen based on current trends.

Indeed, long-term forecasts of trends in technology can prove remarkably accurate. And an excellent way to get forecasts is to combine the ideas of many different experts to find where they agree.

We consulted 12 expert “futurists” for a new research paper. These are people whose roles involve long-term forecasting of the effects of changes in computer technology by the year 2040.

Using a technique called a Delphi study, we combined the futurists’ forecasts into a set of risks, along with their recommendations for addressing those risks.

Software concerns

The experts foresaw rapid progress in artificial intelligence (AI) and connected systems, leading to a much more computer-driven world than today’s. Surprisingly, though, they expected little impact from two much-hyped innovations. Blockchain, a way to record information that makes it difficult for the system to be manipulated, they suggested, is mostly irrelevant to today’s problems. And quantum computing is still at an early stage and may have little impact in the next 15 years.

The futurists highlighted three major risks associated with developments in computer software, as follows.

AI Competition leading to trouble

Our experts suggested that many countries’ stance on AI as an area where they want to gain a competitive, technological edge will encourage software developers to take risks in their use of AI. This, combined with AI’s complexity and potential to surpass human abilities, could lead to disasters.

For example, imagine that shortcuts in testing lead to an error in the control systems of cars built after 2025, which goes unnoticed amid all the complex programming of AI. It could even be linked to a specific date, causing large numbers of cars to start behaving erratically at the same time, killing many people worldwide.

Control systems for advanced cars could be vulnerable to software errors.

Generative AI

Generative AI may make truth impossible to determine. For years, photos and videos have been very difficult to fake, and so we expect them to be genuine. Generative AI has already radically changed this situation. We expect its ability to produce convincing fake media to improve so it will be extremely difficult to tell whether some image or video is real.

Suppose someone in a position of trust – a respected leader, or a celebrity – uses social media to show genuine content, but occasionally incorporates convincing fakes. For those following them, there is no way to tell the difference – it will be impossible to know the truth.

Invisible cyber attacks

Finally, the sheer complexity of the systems that will be built – networks of systems owned by different organisations, all depending on each other – has an unexpected consequence. It will become difficult, if not impossible, to get to the root of what causes things to go wrong.

Imagine a cyber criminal hacking an app used to control devices such as ovens or fridges, causing the devices all to switch on at once. This creates a spike in electricity demand on the grid, creating major power outages.


Read More: Mac at 40: User experience was the innovation that launched a technology revolution


The power company’s experts will find it challenging to identify even which devices caused the spike, let alone spot that all are controlled by the same app. Cyber sabotage will become invisible, and impossible to distinguish from normal problems.

Software jujitsu

The point of such forecasts is not to sow alarm, but to allow us to start addressing the problems. Perhaps the simplest measure the experts suggested was a kind of software jujitsu: using software to guard against and protect us from software itself. We can make computer programs perform their own safety audits by creating extra code that validates the programs’ output – effectively, code that checks itself.
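
As an illustration of that idea – a hedged sketch, with an invented speed-controller example – the extra validation can be as simple as a wrapper that checks every output against a stated safety rule:

```python
def validated(check, description):
    """Decorator: run `check` on every return value and fail loudly if it is violated."""
    def wrap(func):
        def inner(*args, **kwargs):
            result = func(*args, **kwargs)
            if not check(result):
                raise RuntimeError(f"Safety check failed: {description} (got {result!r})")
            return result
        return inner
    return wrap

@validated(lambda speed: 0 <= speed <= 120, "commanded speed must stay within 0-120 km/h")
def target_speed(current: float, adjustment: float) -> float:
    # Imagine this is the complex, possibly AI-driven control logic.
    return current + adjustment

print(target_speed(60, 20))   # 80 -- passes the safety check
try:
    target_speed(110, 50)     # 160 -- the validator blocks the unsafe output
except RuntimeError as e:
    print(e)
```

The validating code is deliberately much simpler than the logic it guards, so it can be reviewed and trusted even when the main program cannot.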

Similarly, we can insist that methods already used to ensure safe software operation continue to be applied to new technologies. And that the novelty of these systems is not used as an excuse to overlook good safety practice.

Strategic solutions

But the experts agreed that technical answers alone will not be enough. Instead, solutions will be found in the interactions between humans and technology.

We need to build up the skills to deal with these human technology problems, and new forms of education that cross disciplines. And governments need to establish safety principles for their own AI procurement and legislate for AI safety across the sector, encouraging responsible development and deployment methods.

These forecasts give us a range of tools to address the possible problems of the future. Let us adopt those tools, to realise the exciting promise of our technological future.


Businesses beware: Malinformation has entered the chat https://stuff.co.za/2024/01/22/businesses-beware-malinformation-has-entered-the-chat/ Mon, 22 Jan 2024 07:56:41 +0000 https://stuff.co.za/?p=188648 It is very cynical to think that no matter how bad things are, they can always get worse. But in some cases, it’s true: today, in addition to being exposed to misinformation and disinformation, we now have ‘malinformation’ to deal with, and it’s something that not just business, but society at large, needs to be more aware of.

For business in particular, though, it can’t just be ignored: businesspeople simply must wrap their heads around this potential threat, especially with the rise of generative AI (GenAI) that can be roped in to create all manner of malicious and convincing campaigns that target businesses.

Malinformation poses such a threat that Gartner predicts that by 2028, businesses worldwide will spend over $30 billion every year to fight it.

What is Malinformation?

Basically, malinformation is truthful information that is used to deceive, harm, or manipulate – often with the goal of making money. The nuance here is that unlike misinformation, which is inaccurate information shared without harmful intent, and disinformation, which is false information shared with the intent to pull the wool over people’s eyes, malinformation is based on truth but deliberately twisted to cause harm.

An example of malinformation in action

Imagine a scenario where a company releases accurate but selective financial information about a competitor to make people think it’s financially unstable. They might share real data about that company’s increased debt levels without also mentioning the corresponding growth in assets and revenue that came about as a result – things that any good businessperson knows would justify taking on additional debt.

This limited view of that company’s finances could feasibly lead to a drop in investor confidence, cause its stock price to drop, and harm the company’s reputation, even though the information shared was technically correct. Painting misleading pictures with selective use of facts and truth is what malinformation is all about.
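
To see how selective truth misleads, consider a toy calculation (all figures invented): the headline “debt up 50%” is accurate, yet the company’s leverage actually improved once its asset growth is counted too.

```python
# Invented figures, in millions.
year1 = {"debt": 100, "assets": 400}
year2 = {"debt": 150, "assets": 750}  # the "malinformation" headline: debt up 50%

debt_growth = (year2["debt"] - year1["debt"]) / year1["debt"]
ratio_before = year1["debt"] / year1["assets"]
ratio_after = year2["debt"] / year2["assets"]

print(f"Debt grew {debt_growth:.0%}")                               # Debt grew 50%
print(f"Debt-to-assets: {ratio_before:.0%} -> {ratio_after:.0%}")   # 25% -> 20%
```

Both numbers are true; quoting only the first paints a company in trouble, while the second shows a balance sheet that got healthier.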

The business impact

There are several reasons to be vigilant against malinformation. First, it can cause brand damage, which could cause customers to lose faith in your company and take their money elsewhere. It can also stir up trouble inside your organisation and lead to employee disengagement and conflict, especially if sensitive information is used.

And then there are the regulatory and legal challenges it could lead to if the malinformation campaign against you involves the misuse of data or breaches of privacy laws. None of these are things businesses want or need.

Gartner’s outlook on malinformation

Research firm Gartner has plenty to say about malinformation. They predict that by 2028, the spending to fight malinformation will eat into marketing and cybersecurity budgets, cannibalising them by as much as 10%. This is an indication of how scary the threat of malinformation is, and how seriously businesses need to take it.

Gartner also believes that by 2027, 45% of CISOs will see their roles expand beyond cybersecurity, and have them tackling issues like malinformation due to the inevitable regulatory pressures that will emerge as well as the expanded attack surface. If you’re a CISO or you play a similar role, expect the fight against malinformation to land in your lap.

Interestingly, Gartner anticipates malinformation to cause knowledge workers to unionise more, with a 1,000% increase in unionisation by 2028. They think the change will be due to the increased adoption of GenAI and the concerns it raises about job security and ethics.

Tackling the challenge

Oh, great, yet another thing to worry about, right? You will be happy to know that good advice on how to tackle this soon-to-be-scourge already exists, and it’s right out of the ‘effective cybersecurity’ and ‘cybersecurity best practices’ handbook.

  1. Lead Responsibly: Assigning a dedicated executive, such as the CISO, to oversee efforts against malinformation is a must. This role should encompass monitoring, prevention, and response strategies.
  2. Educate Employees: Raising awareness about the nature and risks of malinformation among staff is essential. Training sessions can help employees identify and report potential instances of malinformation.
  3. Beef Up Your Security: Reviewing and improving data security and privacy practices can prevent bad actors from accessing sensitive information that could be used maliciously.
  4. Monitoring and Response Plans: Be sure to encourage vigilance among staff, but also have a good and effective response plan in place in case the worst happens anyway.
  5. Keep Everyone Updated: Regular communication with customers, partners, and regulators about how the company is addressing malinformation can help maintain trust and transparency.

Preparation and understanding

As if business isn’t complex enough, we must now deal with malinformation as well. However, business increasingly relies on digital technologies, and the bad actors out to make a quick buck at your expense are highly motivated to keep plying their trade, so understanding and preparing for this relatively new risk is more important than ever.

By recognising the seriousness of the issue, leading responsibly, and putting effective counter-strategies in place, businesses can protect themselves against the damaging effects of malinformation.

And yes, it’s just another thing to add to the list of challenges facing businesses in the modern era, but maybe, just maybe, it will lead to more effective data privacy and protection strategies down the line, and we will one day look back on it all and have a laugh…

Maybe. But while it remains a contemporary threat, it will likely pay to be prepared.
