Something felt ‘off’ – how AI messed with human research, and what we learned
https://stuff.co.za/2024/03/18/how-ai-messed-with-human-research-what-we/ (Mon, 18 Mar 2024)

All levels of research are being changed by the rise of artificial intelligence (AI). Don’t have time to read that journal article? AI-powered tools such as TLDRthis will summarise it for you.

Struggling to find relevant sources for your review? Inciteful will list suitable articles with just the click of a button. Are your human research participants too expensive or complicated to manage? Not a problem – try synthetic participants instead.

Each of these tools suggests AI could be superior to humans in outlining and explaining concepts or ideas. But can humans be replaced when it comes to qualitative research?

This is something we recently had to grapple with while carrying out unrelated research into mobile dating during the COVID-19 pandemic. And what we found should temper enthusiasm for artificial responses over the words of human participants.

Encountering AI in our research

Our research is looking at how people might navigate mobile dating during the pandemic in Aotearoa New Zealand. Our aim was to explore broader social responses to mobile dating as the pandemic progressed and as public health mandates changed over time.

As part of this ongoing research, we prompt participants to develop stories in response to hypothetical scenarios.

In 2021 and 2022 we received a wide range of intriguing and quirky responses from 110 New Zealanders recruited through Facebook. Each participant received a gift voucher for their time.

Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.

A human-generated story about dating during the pandemic.

These responses captured the highs and lows of online dating, the boredom and loneliness of lockdown, and the thrills and despair of finding love during the time of COVID-19.

But, perhaps most of all, these responses reminded us of the idiosyncratic and irreverent aspects of human participation in research – the unexpected directions participants go in, or even the unsolicited feedback you can receive when doing research.

But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.

This time many of the stories felt “off”. Word choices were stilted or overly formal, and each story was moralistic about what one “should” do in a given situation.

Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.
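Screening of this kind can be approximated with a simple stylistic check. The sketch below is illustrative only: real detectors such as ZeroGPT use trained language models, and the marker phrases and threshold here are assumptions, useful at most for flagging responses for manual review rather than rejecting them automatically.

```python
# Illustrative only: a crude stylistic screen for suspiciously stilted,
# moralistic survey responses. The marker phrases and threshold are
# assumptions, not how real detectors such as ZeroGPT work.

MARKERS = [
    "it is important to",
    "one should",
    "in conclusion",
    "furthermore",
    "navigating the complexities",
]

def stilted_score(text: str) -> float:
    """Return the fraction of marker phrases present in the text."""
    lowered = text.lower()
    hits = sum(1 for marker in MARKERS if marker in lowered)
    return hits / len(MARKERS)

def flag_for_review(text: str, threshold: float = 0.4) -> bool:
    """Flag a response for manual checking, not automatic rejection."""
    return stilted_score(text) >= threshold
```

In practice any flagged story would still be read by a human, since such heuristics produce both false positives and false negatives.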

Moralistic and stilted: an AI-generated story about dating during the pandemic.

Contrary to claims that AI can sufficiently replicate human participants in research, we found AI-generated stories to be woeful.

We were reminded that an essential ingredient of any social research is for the data to be based on lived experience.

Is AI the problem?

Perhaps the biggest threat to human research is not AI itself, but rather the philosophy that underpins it.

It is worth noting the majority of claims about AI’s capabilities to replace humans come from computer scientists or quantitative social scientists. In these types of studies, human reasoning or behaviour is often measured through scorecards or yes/no statements.

This approach necessarily fits human experience into a framework that can be more easily analysed through computational or artificial interpretation.

In contrast, we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally pointed to with online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities they might create for intimacy during a time of lockdowns and evolving health mandates.


Read More: Emotion-tracking AI on the job: Workers fear being watched – and misunderstood


In general, we found AI poorly simulated these experiences.

Some might accept generative AI is here to stay, or that AI should be viewed as offering various tools to researchers. Other researchers might retreat to forms of data collection, such as surveys, that might minimise the interference of unwanted AI participation.

But, based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference.

There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants.

Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment.

Regardless of researchers’ theoretical orientation, how we work to limit the involvement of AI is a question for anyone interested in understanding human perspectives or experiences. If anything, the limitations of AI reemphasise the importance of being human in social research.


  • Alexandra Gibson is a Senior Lecturer in Health Psychology, Te Herenga Waka — Victoria University of Wellington
  • Alex Beattie is a Research Fellow, School of Health, Te Herenga Waka — Victoria University of Wellington
  • This article first appeared in The Conversation

What happens when we outsource boring but important work to AI? Research shows we forget how to do it ourselves
https://stuff.co.za/2024/02/26/outsource-boring-important-work-to-ai-auto/ (Mon, 26 Feb 2024)

In 2009, an Air France jet crashed into the ocean, leaving no survivors. The plane’s autopilot system shut down and the pilots, having become reliant on their computerised assistant, were unable to correct the situation manually.

In 2015, a bus driver in Europe typed the wrong destination into his GPS device and cheerfully took a group of Belgian tourists on a 1,200 kilometre detour in the wrong direction.

In 2017, in a decision later overturned on appeal, US prosecutors who had agreed to release a teenager on probation abruptly changed their minds because an algorithm ruled the defendant “high risk”.

These are dramatic examples, but they are far from isolated. When we outsource cognitive tasks to technology – such as flying a plane, navigating, or making a judgement – research shows we may lose the ability to perform those tasks ourselves. There is even a term for our tendency to forget information that is available through online search engines: the Google effect.

As new AI technologies promise to automate an increasing range of activities, the risk of “skill erosion” is growing. Our research shows how it can happen – and suggests ways to keep hold of the expertise you need, even when you don’t need it every day.

Skill erosion can cripple an organisation

My research shows the risk of skill erosion is easily overlooked. In a recent study, my team and I examined skill erosion in an accounting company.

The company had recently stopped using software that automated much of its fixed-asset accounting service. However, the accountants found themselves unable to carry out the task without it. Years of over-reliance on the software had eroded their expertise, and ultimately, they had to relearn their fixed-asset accounting skills.

While the software was rule-based (it did not use machine learning or “AI”), it was “smart” enough to track depreciation and produce reports for many tax and financial purposes. These are tasks that human accountants found very complex and tedious.

The company only became aware of skill erosion after a client found errors in the accounting team’s manual reports. With its accountants lacking sufficient expertise, the company had to commission the software provider to fix the errors.

How skill erosion happens

We found that a lack of mindfulness about the automation-supported task had led to skill erosion. The old saying, “use it or lose it”, applies to cognitively intense work as much as to anything else.

The accountants were not concerned about outsourcing their thinking to the software, since it operated almost flawlessly. In other words, they fell prey to “automation complacency”: the assumption that “all is well” while ignoring potential risks.

This had three major consequences:

  1. they lost their awareness of what automation was doing
  2. they lost the incentive to maintain and update relevant knowledge (such as tax legislation), because the vendor and software did that for them
  3. as the software was reliable, they no longer bothered to check the outgoing reports for accuracy.

How to maintain your skills

So, how do you prevent complacency while using AI and other automated systems? Here are three tips:

  1. pay attention to what the system is doing – what inputs are used, for what purpose, and what might affect its suggestions
  2. keep your competence up to date (especially if you are legally accountable for the outcomes)
  3. critically assess the results, even if the final outcomes appear satisfactory.

What would this look like in practice? Here’s an everyday example: driving with the help of an AI-powered navigation app.

Instead of blindly following the app’s instructions, pay attention to road signs and landmarks, and be aware of what you are doing even when guided by the app.

Study the map and suggested route before driving to increase your “domain knowledge”, or understanding of what is around the route. This helps you relate your specific path to the broader environment, which will be helpful if you get lost or want to find alternative routes.

When you reach your destination, reflect on the route the app suggested: was it fast, was it safe, was it enjoyable? If not, consider taking a different route next time, even if the app suggests otherwise.

Is AI a necessary companion?

The case of the accounting firm also raises a bigger question: what skills are relevant and worth maintaining, and which ones should we relinquish to automation?

There is no universal answer, as professional skills change across time, jurisdictions, industries, cultures and geographical locations. However, it is a question we will have to contend with as AI takes over activities once considered impossible to automate.


Read More: AI has a large and growing carbon footprint, but there are potential solutions on the horizon


Despite the struggles, the accounting manager in our case study believes the automated software is highly beneficial. In his view, his team just got caught off guard by complacency.

In a world focused on efficiency and annual or quarterly targets, organisations favour solutions that improve things in the short term, even if they have negative long-term side effects. This is what happened in the accounting case: efficiency gains overshadowed abstract concerns about expertise, until problems ensued.

This does not mean that we should avoid AI. Organisations cannot afford to miss out on the opportunities it presents. However, they should also be aware of the risk of skill erosion.


The fight against antibiotic resistance is growing more urgent, but artificial intelligence can help
https://stuff.co.za/2023/02/13/antibiotic-fight-artificial-intelligence/ (Mon, 13 Feb 2023)

Since the discovery of penicillin in the late 1920s, antibiotics have “revolutionized medicine and saved millions of lives.” Unfortunately, the effectiveness of antibiotics is now threatened by the increase of antibiotic-resistant bacteria globally.

Antibiotic-resistant infections cause the deaths of up to 1.2 million people annually, making them one of the leading causes of death.

There are several factors contributing to this crisis of resistance to antibiotics. These include overusing and misusing antibiotics in treatments. In addition, pharmaceutical companies are over-regulated and disincentivized from developing new drugs.

The World Health Organization estimates that 10 million people a year will die from such infections by 2050.

The impacts of antibiotic-resistant infections are wide-ranging. In the absence of effective prevention and treatment for bacterial infections, medical procedures such as organ transplants, chemotherapy and caesarean sections become far riskier. That’s because the severity of bacteria-related infections is increasing and untreated infections can cause a variety of health problems.

Discovering new antibiotics

Antibiotics treat illnesses by attacking the bacteria that cause them, either destroying the bacteria outright or preventing them from reproducing.

The discovery of new antibiotics has the potential to save millions of lives. The last discovery of a novel class of antibiotics was in 1984. But it’s not easy to find a truly new antibiotic: only one out of every 15 antibiotics that enter pre-clinical development reaches patients.

Developing a new drug is a costly and often lengthy process. And the process of bringing novel drugs to market and making them accessible presents formidable challenges of its own.

This is where artificial intelligence (AI) comes into play, because it allows researchers to quickly and accurately design and assess potential drugs.

The role of AI in drug design

There has been an explosion in research in recent years in the use of AI for drug design and discovery. AI can identify new antibiotics that are structurally distinct from currently available ones and effective against a range of bacteria.

In order to discover more effective antibiotics, we need to understand the structural basis of resistance, and this understanding enables rational design principles. Developing effective second-generation antibiotics often involves optimizing first-generation drugs.

In drug development, a significant amount of money is spent developing and evaluating each generation of compounds. Researchers can use AI tools to teach computers to find quick and cheap ways of discovering such novel medications.

Artificial intelligence is already showing promising results in finding new antibiotics. In 2019, researchers used a deep learning approach to identify the broad-spectrum antibiotic halicin. Halicin had previously failed clinical trials as a treatment for diabetes, but AI suggested a different application.

Given the early identification of such a potentially strong antibiotic using artificial intelligence, a large number of such broad-spectrum antibiotics that could be effective against a range of bacteria might be identified. These drugs still need to undergo clinical trials.

Researchers at the U.S. National Institutes of Health harnessed AI’s predictive power to demonstrate AI’s potential to accelerate the process of selecting future antibiotics.

AI can be trained to screen and discover new drugs much faster — our lab at Concordia University is using this approach to identify antibiotics that would target bacterial RNA.

Algorithmic learning

Researchers design an algorithm that uses data from databases like ZINC (a collection of commercially available chemicals that can be used for virtual screening) to figure out how molecules and their properties relate. The AI models extract information from the database to analyze their patterns.

The models created by the algorithm are trained on pre-existing data. AI can rapidly sift through huge amounts of data to understand important patterns in the content or structure of a molecule.
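As a toy illustration of this kind of pattern-matching, virtual screening is often sketched as ranking candidate molecules by fingerprint similarity to a known active compound. Everything below is invented for illustration (real pipelines compute fingerprints from structures in libraries like ZINC), but the Tanimoto measure itself is the standard similarity score in cheminformatics.

```python
# A minimal stand-in for virtual screening: rank candidate molecules by
# Tanimoto similarity of their structural fingerprints to a known active.
# Fingerprints here are hand-made bit sets; real pipelines derive them
# from chemical structures in libraries such as ZINC.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: each set holds the indices of "on" bits.
known_active = {1, 4, 7, 9, 12}
candidates = {
    "mol_A": {1, 4, 7, 9, 13},    # close analogue of the active
    "mol_B": {2, 5, 8, 11, 14},   # unrelated scaffold
    "mol_C": {1, 4, 9, 20, 21},   # partial overlap
}

ranked = sorted(candidates,
                key=lambda m: tanimoto(candidates[m], known_active),
                reverse=True)
print(ranked)  # → ['mol_A', 'mol_C', 'mol_B']
```

A trained model replaces the fixed similarity score with a learned relationship between structure and activity, but the ranking-and-triage workflow is the same.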

We have seen the potential of current models to correctly predict how bacterial proteins and anti-bacterial agents would interact. But in order to maximize AI’s predictive capabilities, further refinement will still be required.


Read More: Artificial intelligence in South Africa comes with special dilemmas – plus the usual risks


Limitations of AI

Researchers haven’t yet explored the full potential of AI models. With further developments, such as increased computing power, AI can become an important tool in science. The development of AI in drug discovery research, as well as its use in finding new antibiotics to treat bacterial infections, is a work in progress.

The ability of artificial intelligence to predict and accurately identify leads has shown promising results.

Even with powerful AI approaches, finding new drugs will not be easy. We need to understand that AI is a tool that contributes to research by identifying or predicting an outcome of a research question.

AI is implemented in a number of industries today, and is already changing the world. But it’s not a replacement for a scientist or doctor. AI can help the researcher to enhance or fast-track the process of drug discovery.

Even though we still have a way to go before we can fully utilize this method, there is no doubt that AI will significantly change how drugs are discovered and developed.

Twitter’s new data fees leave scientists scrambling for funding – or cutting research
https://stuff.co.za/2023/02/10/twitters-data-fees-scientists-scrambling/ (Fri, 10 Feb 2023)

Twitter is ending free access to its application programming interface, or API. An API serves as a software “middleman” allowing two applications to talk to each other. An API is an accessible way to collect and share data within and across organizations. For example, researchers at universities unaffiliated with Twitter can collect tweets and other data from Twitter through its API.

Starting Feb. 9, 2023, those wanting access to Twitter’s API will have to pay. The company is looking for ways to increase revenue to reverse its financial slide, and Elon Musk claimed that the API has been abused by scammers. This cost is likely to hinder the research community that relies on the Twitter API as a data source.

The Twitter API launched in 2006, allowing those outside of Twitter access to tweets and corresponding metadata, information about each tweet such as who sent it and when and how many people liked and retweeted it. Tweets and metadata can be used to understand topics of conversation and how those conversations are “liked” and shared on the platform and by whom.

As a scientist and director of a research lab focused on collecting and analyzing posts from social media platforms, I have relied on the Twitter API to collect tweets pertinent to public health for over a decade. My team has collected more than 80 million observations over the past decade, publishing dozens of papers on topics from adolescents’ use of e-cigarettes to misinformation about COVID-19.

Twitter has announced that it will allow bots that it deems provide beneficial content to continue unpaid access to the API, and that the company will offer a “paid basic tier,” but it’s unclear whether those will be helpful to researchers.


Read More: Creators can now earn ad revenue on Twitter (but there’s a catch)


Blocking out and narrowing down

Twitter is a social media platform that hosts interesting conversations across a variety of topics. As a result of free access to the Twitter API, researchers have followed these conversations to try to better understand public attitudes and behaviors. I’ve treated Twitter as a massive focus group where observations – tweets – can be collected in near real time at relatively low cost.

The Twitter API has allowed me and other researchers to study topics of importance to society. Fees are likely to narrow the field of researchers who can conduct this work, and narrow the scope of some projects that can continue. The Coalition for Independent Technology Research issued a statement calling on Twitter to maintain free access to its API for researchers. Charging for access to the API “will disrupt critical projects from thousands of journalists, academics and civil society actors worldwide who study some of the most important issues impacting our societies today,” the coalition wrote.

The financial burden will not affect all academics equally. Some scientists are positioned to cover research costs as they arise in the course of a study, even unexpected or unanticipated costs. In particular, scientists at large research-heavy institutions with grant budgets in the millions of dollars are likely to be able to cover this kind of charge.

However, many researchers will be unable to cover the as yet unspecified costs of the paid service because they work on fixed or limited budgets. For example, doctoral students who rely on the Twitter API for data for their dissertations may not have additional funding to cover this charge. Charging for access to the Twitter API will ultimately reduce the number of participants working to understand the world around us.

The terms of Twitter’s paid service will require me and other researchers to narrow the scope of our work, as pricing limits will make it too expensive to continue to collect as much data as we would like. As the amount of data requested goes up, the cost goes up.

We will be forced to forgo data collection on some topic areas. For example, we collect a lot of tobacco-related conversations, and people talk about tobacco by referencing the behavior – smoking or vaping – and also by referencing a product, like JUUL or Puff Bar. I add as many terms as I can think of to cast a wide net. If I’m going to be charged per word, it will force me to rethink how wide a net I cast. This will ultimately reduce our understanding of issues important to society.
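The “wide net” of search terms can be pictured as a simple keyword filter over collected posts. The terms and posts below are invented for illustration; in the real API the matching happens server-side against a researcher-supplied query, and pricing scales with the volume of data requested.

```python
# Sketch of casting a "wide net": match posts against behaviour terms
# (smoking, vaping) and product names (JUUL, Puff Bar). The terms and
# posts are invented for illustration.

TOBACCO_TERMS = {"smoking", "vaping", "vape", "juul", "puff bar", "cigarette"}

def matches_net(post: str, terms=TOBACCO_TERMS) -> bool:
    """True if any search term appears in the post (case-insensitive)."""
    lowered = post.lower()
    return any(term in lowered for term in terms)

posts = [
    "quit vaping last month and feeling great",
    "new phone day!",
    "saw someone with a puff bar at the bus stop",
]
relevant = [p for p in posts if matches_net(p)]
print(len(relevant))  # → 2
```

Every term added to the net widens coverage but, under metered pricing, also raises the cost of collection, which is exactly the trade-off described above.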

Difficult adjustments

Costs aside, many academic institutions are likely to have a difficult time adapting to these changes. For example, most universities are slow-moving bureaucracies with a lot of red tape. Entering into a financial relationship or completing even a small purchase may take weeks or months. In the face of the impending Twitter API change, this will likely delay data collection and the knowledge it could produce.

Unfortunately, everyone relying on the Twitter API for data was given little more than a week’s notice of the impending change. This short period has researchers scrambling as we try to prepare our data infrastructures for the changes ahead and make decisions about which topics to continue studying and which topics to abandon.

If the research community fails to properly prepare, scientists are likely to face gaps in data collection that will reduce the quality of our research. And in the end that means a loss of knowledge for the world.

  • Jon-Patrick Allem is an Assistant Professor of Research in Population and Public Health Sciences, University of Southern California
  • This article first appeared on The Conversation

ChatGPT could be a game-changer for marketers, but it won’t replace humans any time soon
https://stuff.co.za/2023/01/23/chatgpt-could-be-game-changer-for-marketers/ (Mon, 23 Jan 2023)

The release of ChatGPT in November 2022 has generated significant public interest. In essence, ChatGPT is an AI-powered chatbot that allows users to hold human-like conversations with an AI.

GPT stands for Generative Pre-trained Transformer, a language processing model developed by the American artificial intelligence company OpenAI. The GPT language model uses deep learning, a branch of machine learning that trains artificial neural networks modelled loosely on the human brain, to produce human-like responses.

ChatGPT has a user-friendly interface that utilizes this technology, allowing users to interact with it in a conversational manner.

In light of this new technology, businesses and consumers alike have shown great interest in how such an innovation could revolutionize marketing strategies and customer experiences.

What’s so special about ChatGPT?

What sets ChatGPT apart from other chatbots is the size of its dataset. Chatbots are usually trained on a smaller dataset in a rule-based manner designed to answer specific questions and conduct certain tasks.

ChatGPT, on the other hand, is trained on a huge dataset — some 570 gigabytes of text — and its model contains 175 billion parameters, enabling it to perform a range of tasks in different fields and industries. 570GB is equivalent to over 385 million pages in Microsoft Word.

Given the amount of data, ChatGPT can carry out a range of language-related activities, including answering questions in different fields and sectors, responding in different languages and generating content.

Friend or foe to marketers?

While ChatGPT may be a tremendous tool for marketers, it is important to understand the realistic possibilities and expectations of it to get the most value from it.

Traditionally, with the emergence of new technologies, consumers tend to go through Gartner’s hype cycle. In essence, Gartner’s cycle explains the process people go through when adopting a new technology.

The cycle starts with the innovation trigger and peak of inflated expectations stages, when consumers get enthusiastic about a new technology and expectations start to build. Then consumers realize the pitfalls of the technology, creating a gap between expectations and reality. This is called the trough of disillusionment.

This is followed by the slope of enlightenment when consumers start to understand the technology and use it more appropriately and reasonably. Finally, the technology becomes widely adopted and used as intended during the plateau of productivity stage.

With the current public excitement surrounding ChatGPT, we appear to be nearing the peak of inflated expectations stage. It’s important for marketers to set realistic expectations for consumers and navigate the integration of ChatGPT to mitigate the effects of the trough of disillusionment stage.

Possibilities of ChatGPT

In its current form, ChatGPT cannot replace the human factor in marketing, but it could support content creation, enhance customer service, automate repetitive tasks and support data analysis.

Supporting content creation: Marketers may use ChatGPT to enhance existing content by using it to edit written work, make suggestions, summarize ideas and improve overall copy readability. Additionally, ChatGPT may enhance search engine optimization strategy by examining ideal keywords and tags.

Enhancing customer service: Businesses may train ChatGPT to respond to frequently asked questions and interact with customers in a human-like conversation. Rather than replacing the human factor, ChatGPT could provide 24/7 customer support. This could optimize business resources and enhance internal processes by leaving high-impact and sensitive tasks to humans. ChatGPT can also be trained in different languages, further enhancing customer experience and satisfaction.

Automating repetitive marketing tasks: According to a 2015 HubSpot report, marketers spent a significant amount of their time on repetitive tasks, such as sending emails and creating social media posts. While part of that challenge has been addressed with customer relationship management software, ChatGPT may enhance this by providing an added layer of personalization through the generation of creative content.

Additionally, ChatGPT may be helpful in other tasks, such as product descriptions. With access to a wealth of data, ChatGPT would be able to frequently update and adjust product descriptions, allowing marketers to focus on higher-impact tasks.


Read More: ChatGPT: students could use AI to cheat, but it’s a chance to rethink assessment altogether


Limitations of ChatGPT

While the wide range of possibilities for enhancing marketing processes with ChatGPT is enticing, it is important for businesses to know about some key limitations and when to limit or avoid using ChatGPT in business operations.

Emotional intelligence: ChatGPT provides state-of-the-art, human-like responses and content. However, it is important to be aware that the tool is only human-like. As with traditional chatbots, the degree of human-likeness will be essential for process enhancement and content creation.

Marketers could use ChatGPT to enhance customer experience, but without humans to provide relevancy, character, experience and personal connection, it will be challenging to fully capitalize on ChatGPT. Relying on ChatGPT to build customer connections and engagement without the involvement of humans may diminish meaningful customer connection instead of enhancing it.

Accuracy: While the marketing content may appear logical, it is important to note that ChatGPT is not error free and may provide incorrect and illogical answers. Marketers need to review and validate the content generated by ChatGPT to avoid possible errors and ensure consistency with brand message and image.

Creativity: Relying on ChatGPT for creative content may cause short- and long-term challenges. ChatGPT lacks the lived experience of individuals and understanding the complexity of human nature. Over-relying on ChatGPT may limit creative abilities, so it should be used to support ideation and enhance existing content while still allowing room for human creativity.

Humans are irreplaceable

While ChatGPT has the potential to enhance marketing effectiveness, businesses should only use the technology as a tool to assist humans, not replace them. ChatGPT could provide creative content and support content ideation. However, the human factor is still essential for examining outputs and creating marketing messages that are consistent with a firm’s business strategy and vision.

A business that does not have a strong marketing strategy before integrating ChatGPT remains at a competitive disadvantage. However, with appropriate marketing strategies and plans, ChatGPT could effectively enhance and support existing marketing processes.

  • Omar H. Fares is a Lecturer in the Ted Rogers School of Retail Management, Toronto Metropolitan University
  • This article first appeared on The Conversation

You’ve likely heard of the brain’s gray matter – here’s why the white matter is important too
https://stuff.co.za/2022/05/08/youve-likely-heard-of-the-brains-gray-matter/ (Sun, 08 May 2022)

Who has not contemplated how a memory is formed, a sentence generated, a sunset appreciated, a creative act performed or a heinous crime committed?

The human brain is a three-pound organ that remains largely an enigma. But most people have heard of the brain’s gray matter, which is needed for cognitive functions such as learning, remembering and reasoning.

More specifically, gray matter refers to regions throughout the brain where nerve cells – known as neurons – are concentrated. The region considered most important for cognition is the cerebral cortex, a thin layer of gray matter on the brain’s surface.

But the other half of the brain – the white matter – is often overlooked. White matter lies below the cortex and also deeper in the brain. Wherever it is found, white matter connects neurons within the gray matter to each other.

I am a professor of neurology and psychiatry and the director of the behavioral neurology section at the University of Colorado Medical School. My work involves the evaluation, treatment and investigation of older adults with dementia and younger people with traumatic brain injury.

Finding out how these disorders affect the brain has motivated many years of my study. I believe that understanding white matter is perhaps a key to understanding these disorders. But so far, researchers have generally not given white matter the attention it deserves.

Figuring out the white matter

This lack of recognition largely stems from the difficulty in studying white matter. Because it’s located below the surface of the brain, even the most high-tech imaging can’t easily resolve its details. But recent findings, made possible by advancements in brain imaging and autopsy examinations, are beginning to show researchers how critical white matter is.

White matter is composed of many billions of axons, which are like long cables that carry electrical signals. Think of them as elongated tails that act as extensions of the neurons. The axons connect neurons to each other at junctions called synapses. That is where communication between neurons takes place.

Axons come together in bundles, or tracts, that course throughout the brain. Placed end to end, their combined length in a single human brain is approximately 85,000 miles. Many axons are insulated with myelin, a layer of mostly fat that speeds up electrical signaling, or communication, between neurons by up to 100 times.

This increased speed is crucial for all brain functions and is partly why Homo sapiens have unique mental capacities. While there’s no doubt our large brains are due to evolution’s addition of neurons over eons, there has been an even greater increase in white matter over evolutionary time.

This little-known fact has profound implications. The increased volume of white matter – mainly from the myelin sheaths that surround axons – enhances the efficiency of neurons in the gray matter to optimize brain function.

Imagine a nation of cities that are all functioning independently, but not linked to other cities by roads, wires, the internet or any other connections. This scenario would be analogous to the brain without white matter. Higher functions like language and memory are organized into networks in which gray matter regions are connected by white matter tracts. The more extensive and efficient those connections, the better the brain works.

White matter and Alzheimer’s

Given its essential role in the connections between brain cells, damaged white matter can disturb any aspect of cognitive or emotional function. White matter pathology is present in many brain disorders and can be severe enough to cause dementia. Damage to myelin is common in these disorders, and when the disease or injury is more severe, axons can also be damaged.

More than 30 years ago, my colleagues and I described this syndrome as white matter dementia. In this condition, the dysfunctional white matter is no longer adequately performing as a connector, meaning that the gray matter cannot act together in a seamless and synchronous manner. The brain, in essence, has been disconnected from itself.

Equally important is the possibility that white matter dysfunction plays a role in many diseases currently thought to originate in gray matter. Some of these diseases stubbornly defy understanding. For example, I suspect white matter damage may be critical in the early phases of Alzheimer’s disease and traumatic brain injury.

Alzheimer’s is the most common type of dementia in older individuals. It can impair cognitive function and rob people of their very identity. No cure or effective treatment exists. Ever since Alois Alzheimer’s 1907 observations of gray matter proteins – called amyloid and tau – neuroscientists have believed the buildup of these proteins is the central problem behind Alzheimer’s. Yet many drugs that remove these proteins do not stop the patients’ cognitive decline.

Recent findings increasingly suggest that white matter damage – preceding the accumulation of those proteins – may be the true culprit. As brains age, they often experience gradual loss of blood flow from the narrowing of vessels that convey blood from the heart. Lower blood flow heavily impacts white matter.

Remarkably, there is even evidence that inherited forms of Alzheimer’s also feature early white matter abnormalities. That means therapies aimed at maintaining blood flow to white matter may prove more effective than attempting to dislodge proteins. One simple treatment likely to help is controlling high blood pressure, as this can reduce the severity of white matter abnormalities.


White matter and traumatic brain injury

Patients with traumatic brain injury, particularly those with moderate or severe injuries, can have lifelong disability. One of the most ominous outcomes of TBI is chronic traumatic encephalopathy, a brain disease believed to cause progressive and irreversible dementia. In TBI patients, the accumulation of tau protein in gray matter is evident.

Researchers have long recognized that white matter damage is common in people who have sustained a TBI. Observations from the brains of those with repetitive traumatic brain injuries – football players and military veterans have been frequently studied – have shown that white matter damage is prominent, and may precede the appearance of tangled proteins in the gray matter.

Among scientists, there is a burgeoning excitement over the new interest in white matter. Researchers are now beginning to acknowledge that the traditional focus on the study of gray matter has not produced the results they hoped. Learning more about the half of the brain known as white matter may help us in the years ahead to find the answers needed to alleviate the suffering of millions.

Researchers identified over 5,500 new viruses in the ocean, including a missing link in viral evolution
https://stuff.co.za/2022/04/18/researchers-identified-over-5500-new-viruses/ Mon, 18 Apr 2022 14:23:22 +0000

An analysis of the genetic material in the ocean has identified thousands of previously unknown RNA viruses and doubled the number of phyla, or biological groups, of viruses thought to exist, according to a new study our team of researchers has published in the journal Science.

RNA viruses are best known for the diseases they cause in people, ranging from the common cold to COVID-19. They also infect plants and animals important to people.

These viruses carry their genetic information in RNA, rather than DNA. RNA viruses evolve at much quicker rates than DNA viruses do. While scientists have cataloged hundreds of thousands of DNA viruses in their natural ecosystems, RNA viruses have been relatively unstudied.

There are more RNA viruses in the oceans than researchers previously thought. Guillermo Domínguez Huerta, CC BY-ND

Unlike humans and other organisms composed of cells, however, viruses lack unique short stretches of DNA that could act as what researchers call a genetic bar code. Without this bar code, trying to distinguish different species of virus in the wild can be challenging.

To get around this limitation, we decided to identify the gene that codes for a particular protein that allows a virus to replicate its genetic material. It is the only protein that all RNA viruses share, because it plays an essential role in how they propagate themselves. Each RNA virus, however, has small differences in the gene that codes for the protein that can help distinguish one type of virus from another.

So we screened a global database of RNA sequences from plankton collected during the four-year Tara Oceans expeditions global research project. Plankton are any aquatic organisms that are too small to swim against the current. They’re a vital part of ocean food webs and are common hosts for RNA viruses. Our screening ultimately identified over 44,000 genes that code for the virus protein.

Our next challenge, then, was to determine the evolutionary connections between these genes. The more similar two genes were, the more likely viruses with those genes were closely related. Because these sequences had evolved so long ago (possibly predating the first cell), the genetic signposts indicating where new viruses may have split off from a common ancestor had been lost to time. A form of artificial intelligence called machine learning, however, allowed us to systematically organize these sequences and detect differences more objectively than if the task were done manually.

We identified a total of 5,504 new marine RNA viruses and doubled the number of known RNA virus phyla from five to 10. Mapping these new sequences geographically revealed that two of the new phyla were particularly abundant across vast oceanic regions, with regional preferences in either temperate and tropical waters (the Taraviricota, named after the Tara Oceans expeditions) or the Arctic Ocean (the Arctiviricota).

We believe that Taraviricota might be the missing link in the evolution of RNA viruses that researchers have long sought, connecting two different known branches of RNA viruses that diverged in how they replicate.

Why it matters

These new sequences help scientists better understand not only the evolutionary history of RNA viruses but also the evolution of early life on Earth.

As the COVID-19 pandemic has shown, RNA viruses can cause deadly diseases. But RNA viruses also play a vital role in ecosystems because they can infect a wide array of organisms, including microbes that influence environments and food webs at the chemical level.

Mapping out where in the world these RNA viruses live can help clarify how they affect the organisms driving many of the ecological processes that run our planet. Our study also provides improved tools that can help researchers catalog new viruses as genetic databases grow.

Viruses do more than just cause disease.

What still isn’t known

Despite identifying so many new RNA viruses, it remains challenging to pinpoint what organisms they infect. Researchers are also currently limited to mostly fragments of incomplete RNA virus genomes, partly because of their genetic complexity and technological limitations.

Our next steps would be to figure out what kinds of genes might be missing and how they changed over time. Uncovering these genes could help scientists better understand how these viruses work.

  • Guillermo Dominguez Huerta is a Science Consultant in Microbiology, The Ohio State University
  • Ahmed Zayed is a Research Scientist in Microbiology, The Ohio State University
  • James Wainaina is a Postdoctoral Research Associate in Microbiology, The Ohio State University
  • Matthew Sullivan is a Professor of Microbiology, The Ohio State University

This article first appeared on The Conversation.

Google Labs is launching a new blockchain division – report
https://stuff.co.za/2022/01/24/google-labs-is-launching-a-new-blockchain-division-report/ Mon, 24 Jan 2022 09:11:58 +0000

Google Labs is a relatively new division within the company, intended to explore the feasibility of new products. In addition to things like a set of augmented reality goggles (which they’d better call Googgles), Labs is also working on blockchain tech.

At least, that’s the word according to a report from Bloomberg. This follows recent reports that Google is interested in getting into the crypto space. This project could be the company’s way in the door.

Google Labs on the block

The new division is supposedly run by Shivakumar Venkataraman, a veteran of the company’s advertising side of things. The group will explore “blockchain and other next-gen distributed computing and data storage technologies”, according to the report. What that looks like is anyone’s guess right now, since Google’s not being official about any of it.

Google Labs was founded with the intent of investigating “high-potential, long-term projects”. The eruption of crypto and NFTs in the last few months certainly is an avenue worth exploring, at least in the eyes of Google executives.

But, as Ars Technica points out, it can be difficult to see what point there is in adding an energy-intensive blockchain (the buzzword for a decentralised database) to… anything, really. It makes a kind of sense for cryptocurrency, but it’d be a terrible idea for, say, most other things. It’s a bulletproof database, sure, but the maintenance and power overheads make it impractical for everyday usage.
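
The tamper-evidence that makes a blockchain a “bulletproof” database comes from each block committing to a hash of its predecessor, so editing any earlier entry breaks every later link. A minimal sketch, illustrative only (real chains add consensus, signatures and the energy-intensive proof-of-work that drives those overheads):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically (sorted keys)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    """Recompute every link; an edit to an earlier block breaks all later ones."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for entry in ["alice pays bob", "bob pays carol"]:
    add_block(chain, entry)
print(verify(chain))                     # True
chain[0]["data"] = "alice pays mallory"  # tamper with history
print(verify(chain))                     # False
```

Nothing here needs a distributed network at all, which is exactly the sceptics’ point: a plain database with backups covers most use cases at a fraction of the cost.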

Still, if anyone can figure out how to turn the blockchain into something practical and not at all speculative, it’s probably the boffins that populate the Google Labs division. We’ll have more info on the project, when and if it becomes available.

Source: Bloomberg

Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it
https://stuff.co.za/2022/01/22/building-machines-that-work-for-everyone/ Sat, 22 Jan 2022 13:23:04 +0000

People interact with machines in countless ways every day. In some cases, they actively control a device, like driving a car or using an app on a smartphone. Sometimes people passively interact with a device, like being imaged by an MRI machine. And sometimes they interact with machines without consent or even knowing about the interaction, like being scanned by a law enforcement facial recognition system.

Human-Machine Interaction (HMI) is an umbrella term that describes the ways people interact with machines. HMI is a key aspect of researching, designing and building new technologies, and also studying how people use and are affected by technologies.

Researchers, especially those traditionally trained in engineering, are increasingly taking a human-centered approach when developing systems and devices. This means striving to make technology that works as expected for the people who will use it by taking into account what’s known about the people and by testing the technology with them. But even as engineering researchers increasingly prioritize these considerations, some in the field have a blind spot: diversity.

As an interdisciplinary researcher who thinks holistically about engineering and design and an expert in dynamics and smart materials with interests in policy, we have examined the lack of inclusion in technology design, the negative consequences and possible solutions.

People at hand

Researchers and developers typically follow a design process that involves testing key functions and features before releasing products to the public. Done properly, these tests can be a key component of compassionate design. The tests can include interviews and experiments with groups of people who stand in for the public.

In academic settings, for example, the majority of study participants are students. Some researchers attempt to recruit off-campus participants, but these communities are often similar to the university population. Coffee shops and other locally owned businesses, for example, may allow flyers to be posted in their establishments. However, the clientele of these establishments is often students, faculty and academic staff.

In many industries, co-workers serve as test participants for early-stage work because it is convenient to recruit from within a company. It takes effort to bring in outside participants, and when they are used, they often reflect the majority population. Therefore, many of the people who participate in these studies have similar demographic characteristics.

Real-world harm

It is possible to use a homogenous sample of people to publish a research paper that adds to a field’s body of knowledge. And some researchers who conduct studies this way acknowledge the limitations of homogenous study populations. However, when it comes to developing systems that rely on algorithms, such oversights can cause real-world problems. Algorithms are only as good as the data used to build them.

Algorithms are often based on mathematical models that capture patterns and then inform a computer about those patterns to perform a given task. Imagine an algorithm designed to detect when colors appear on a clear surface. If the set of images used to train that algorithm consists of mostly shades of red, the algorithm might not detect when a shade of blue or yellow is present.
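That red-versus-blue analogy can be made concrete with a toy detector that recognises only pixels resembling its training examples. The RGB values, distance measure and radius below are invented for illustration; no real system is this simple, but the failure mode is the same:

```python
def euclidean(p, q):
    """Straight-line distance between two RGB pixels."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def make_detector(training_pixels, radius=80):
    """'Train' a toy detector: a pixel counts as 'color present' only if
    it lies near some example the detector has already seen."""
    def detect(pixel):
        return any(euclidean(pixel, ex) < radius for ex in training_pixels)
    return detect

# Skewed training data: nothing but shades of red.
reds_only = [(200, 30, 30), (230, 60, 50), (180, 20, 40), (255, 80, 70)]
detect = make_detector(reds_only)

print(detect((210, 40, 35)))   # True  – close to the training data
print(detect((40, 60, 220)))   # False – blue was never in the training set
```

The blue pixel is not inherently harder to detect; the detector simply never saw anything like it. The failures described below follow the same pattern at much higher stakes.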

In practice, algorithms have failed to detect darker skin tones for Google’s skincare program and in automatic soap dispensers; accurately identify a suspect, which led to the wrongful arrest of an innocent man in Detroit; and reliably identify women of color. MIT artificial intelligence researcher Joy Buolamwini describes this as algorithmic bias and has extensively discussed and published work on these issues.

Even as the U.S. fights COVID-19, the lack of diverse training data has become evident in medical devices. Pulse oximeters, which are essential for keeping track of your health at home and for indicating when you might need hospitalization, may be less accurate for people with melanated skin. These design flaws, like those in algorithms, are not inherent to the device but can be traced back to the technology being designed and tested using populations that were not diverse enough to represent all potential users.

Being inclusive

Researchers in academia are often under pressure to publish research findings as quickly as possible. Therefore, reliance on convenience samples – that is, people who are easy to reach and get data from – is very common.

Though institutional review boards exist to ensure that study participants’ rights are protected and that researchers follow proper ethics in their work, they don’t have the responsibility to dictate to researchers who they should recruit. When researchers are pressed for time, considering different populations for study subjects can mean additional delay. Finally, some researchers may simply be unaware of how to adequately diversify their study’s subjects.

There are several ways researchers in academia and industry can increase the diversity of their study participant pools.

One is to make time to do the inconvenient and sometimes hard work of developing inclusive recruitment strategies. This can require creative thinking. One such method is to recruit diverse students who can serve as ambassadors to diverse communities. The students can gain research experience while also serving as a bridge between their communities and researchers.

Another is to allow members of the community to participate in the research and provide consent for new and unfamiliar technologies whenever possible. For example, research teams can form an advisory board composed of members from various communities. Some fields frequently include an advisory board as part of their government-funded research plans.

Another approach is to include people who know how to think through cultural implications of technologies as members of the research team. For instance, the New York City Police Department’s use of a robotic dog in Brooklyn, Queens and the Bronx sparked outrage among residents. This might have been avoided if they had engaged with experts in the social sciences or science and technology studies, or simply consulted with community leaders.

Lastly, diversity is not just about race but also age, gender identity, cultural backgrounds, educational levels, disability, English proficiency and even socioeconomic levels. Lyft is on a mission to deploy robotaxis next year, and experts are excited about the prospects of using robotaxis to transport the elderly and disabled. It is not clear whether these aspirations include those who live in less-affluent or low-income communities, or lack the family support that could help prepare people to use the service. Before dispatching a robotaxi to transport grandmothers, it’s important to take into account how a diverse range of people will experience the technology.

  • Tahira Reid is an Associate Professor of Mechanical Engineering, Purdue University
  • James Gibert is an Associate Professor of Mechanical Engineering, Purdue University
  • This article first appeared on The Conversation

Pandemic, war and environmental disaster push scientists to deliver quick answers – here’s what it takes to do good science under pressure
https://stuff.co.za/2021/12/18/pandemic-war-and-environmental-disaster-push-scientists-to-deliver-quick-answers-heres-what-it-takes-to-do-good-science-under-pressure/ Sat, 18 Dec 2021 13:16:40 +0000

How can you know that science done quickly during a crisis is good science?

This question has taken on new relevance with the COVID-19 vaccine rollout. Researchers developed vaccines in under a year – easily breaking the previous record of four years. But that pace of development may be part of the reason about 1 in 7 unvaccinated adults in the U.S. say they will never get the COVID-19 shot. This is in spite of continued assurances from infectious disease experts that the vaccines are safe.

Scientists are called on to come up with answers under pressure whenever there is a crisis, from the Challenger space shuttle explosion to the 2020 California wildfires. As they shift from “regular” to “crisis” research, they must maintain rigorous standards despite long hours, mentally demanding tasks and persistent outside scrutiny. Thankfully, science produced under urgent conditions can be just as robust and safe as results produced under normal conditions.

We are two social scientists interested in understanding how researchers can best work on urgent problems and deliver useful findings.

In a recent study, we focused on “conflict archaeologists,” an interdisciplinary group tasked with rapid assessments of archaeological destruction in Syria during the war between 2014 and 2017. Observers feared that one particular form of destruction, artifact looting, was a major source of revenue for terrorist groups, including the Islamic State. Prominent policymakers, security officials and a worried public wanted clear answers, quickly.

Then-Secretary of State John Kerry praised the work of crisis archaeologists as ‘the gold standard’ in a 2014 speech about the looting of cultural artifacts. U.S. Department of State, CC BY

By any measure, conflict archaeologists succeeded. They produced findings that improved scientific knowledge. Their research led to a landmark bipartisan bill signed by President Obama. Perhaps most importantly, they raised public awareness of the problems associated with looting and smuggling archaeological materials.

Our latest research aimed to understand how work cultures played a role in these achievements – and what lessons can be applied in crisis science across disciplines.

What worked for conflict archaeologists

To investigate, we interviewed 35 conflict archaeologists and other scientists who worked with them. We also observed work in satellite labs and team meetings, and talked to people who used the data and analysis created by conflict archaeologists.

Those we interviewed worked in different physical locations and across multiple disciplines. If they met, they would do so remotely. And yet they were generally aware of what others in this research area were doing. Collaboration is central to doing good urgent science, and we found three key factors behind successfully working together during a crisis.

First, the percentage and distribution of effort matter. We call this “temporal control.” We found that full-time devotion to crisis science was not necessarily the only way to produce good work. In fact, researchers involved on a part-time basis expressed higher confidence in the quality of other collaborators’ work. We think part-timers were able to maintain a more comprehensive perspective on the collaboration overall.

And keeping a hand in their usual scientific practices seemed to help researchers stay sharp. It meant that when they turned to urgent science tasks, they could do so with fresh eyes and renewed attention to methodological precision.

Second, sharing responsibility for outcomes motivated researchers to generate rapid findings for policy and public-interest needs. We call this “responsibility control.” Effective conflict archaeologists distributed credit among collaborators. They translated their objectives and priorities for policymakers and set boundaries and expectations for understanding and using their findings. As a result, they could do their work with the knowledge that they stood with a team – producing accurate findings that could be used to combat artifact looting and trafficking was not any one individual’s sole responsibility.


Finally, it was important to have limits around the extent of an individual’s personal involvement. This is “scope control,” a work environment that helped scientists set boundaries between the research and their personal lives. “It was exhausting,” one respondent told us. “I tried not to take the work home with me, but I know it was starting to affect my family life.”

Scientists who were able to control the scope of their work, and to speak openly about their challenges, were more likely to stick with the project and express confidence in the strength of the research. We hypothesize that those who are able to set borders around what and how much work they take on are in a better position to assess the strength of both their own research and that of others – and thus feel confident in it.

Creating the conditions for good crisis science

Generating high-quality, safe and reliable scientific research under pressure is not a matter of having a heroic personality or superhuman stamina. It is a matter of thoughtful, deliberate work environments and being part of professional fields that support their members even as they hold them to high standards of rigor and ethics.

To be sure, no two crises are identical. At the same time, crisis science best practices can be adapted to fit the specific circumstances of the project. Global pandemics or imminent environmental catastrophe may require short, intensive, full-time bursts of work. Some research projects are lab- or equipment-sensitive and require specific personnel. As our findings show, science conducted with a supportive infrastructure, with rigor and ethics built into the process, can produce reliable results under pressure.

Like COVID-19 researchers, conflict archaeologists worked with tight deadlines under intense scrutiny. Both groups also emphasized the need for researchers to continue to employ high ethical standards in the research process.

And understanding how scientists maintain their ethics and rigor while working under difficult conditions is essential for preserving the public’s trust in science.

This much is certain: Crises aren’t going away. As long as society is relying on scientists for solutions, it’s important to create conditions conducive to effective research.
