Big-picture thinking: in The Bell of the World, Gregory Day listens to the music of common things

Gregory Day’s The Bell of the World is an ambitious, strange and marvellous novel.

Day has long pursued a determined and idiosyncratic practice of paying attention, in writing, to where he lives on Wadawurrung and Gadubanud Country. In the introduction to his excellent collection of nature writing, Words are Eagles, he considers the complexity of

writing in this polyphonous country that is now Australia […] How to describe it without betraying it. How to love it without destroying it. How to learn from it and listen to the lessons it has to impart.

In The Bell of the World, he gives this project full flight.

Review: The Bell of the World – Gregory Day (Transit Lounge)

The vehicle for the novel’s exploration of the loving and fraught entanglements of art and nature, human and non-human, in settler-colonial Australia is Sarah Hutchinson, a very odd girl living some time in the early 20th century.

As readers, we are used to developing a sense of character with reference to how someone interacts with the social world. With Sarah Hutchinson, Day attempts something quite different. Sarah is unmoored from her social context or situation for much of the first part of the novel.

To apprehend a character whose main points of reference are in the natural rather than the social world is a difficult task for novelist and reader alike, and I struggled with Sarah for much of this section. She is first flung out of an unloving family into a miserable British boarding school, then into the milieu of her loving uncle Ferny – a cosmopolitan, queer farmer and naturalist, who is residing in Rome with his battered copy of Joseph Furphy’s Such is Life.

Ferny’s posse of artist friends provides Sarah with a thrilling but destabilising education. This rackety way of learning how to be with others leaves Sarah in a state of deep distress akin to trauma, as she is deposited in a house in the bush with a woman who has much deeper roots in that place than Sarah and her family. She emerges from this trauma by means of being outside and seeing what is there.

The novel proceeds to set out a philosophy for a way of living and writing that Day has practised throughout his career to date. It stresses the importance of attending to a particular place and its ecology with detail, care, respect and humour. Sarah learns to do this on the Ngangahook Run: land farmed by her family for generations, which seems, at first, to promise an arcadia where art, nature and love can coexist.

A unique perspective

The Bell of the World forces the reader to slow down and attend. Sarah’s is a unique perspective. Everyday activities become odd and multifaceted when seen through her eyes. Adjectives abound: a journey in a gig becomes “a shawled and shuddery survey of the nearby but seldom-seen ground”. The effect is a little like reading Patrick White: there is so much – sometimes too much – inner consciousness sparking off and interpreting the world in novel ways that the larger sense of what is happening and who is apprehending it recedes.

Day is at his best when he grounds his runaway multidirectional novel. Concrete but resounding moments of social interaction reveal multitudes about the world in which Sarah is jangling around, and about why the novel’s philosophical argument about nature is so important.

One such moment is a conversation about the bell that some locals want to build in the church. Sarah’s uncle Ferny tells them straight: for the Aboriginal people of the region, the bell would sound not of progress but brutality. The bell is an imposition of settler-invader voice across the ecology of the place, and one that resounds with memories of colonial violence for the local First Nations people.

Gregory Day.
Pan Macmillan Australia

The bell proponent proceeds with an expectation that he will not be brooked; Ferny, with a sentence, brooks him. This sets off a cascade of events that upends Sarah’s sense of her relation to the world.

The novel expresses a profound optimism about the prospect of a settler-invader way of being with nature in Australia that does not seek to dominate but to know and care for it. This optimism is tempered by reminders of how alien this way of being seems to a culture grounded in the domination of others. The reader is immersed in Sarah’s first-person perspective for such a long stretch that we could forget that she doesn’t know a lot about the community of people in the town. Her shock, when their distrust of difference explodes into violence, is ours too.

At one point, Sarah’s meandering internal monologue settles on the term “endotic”, which she says is “the opposite of exotic”. The word just sits there, confusing. It is not listed in the OED. It sent me down a rabbit hole that led to the musings of the French experimental writer Georges Perec about an anthropology of the “infra-ordinary”. In 1973, Perec asked:

How are we to speak of these “common things”, how to track them down rather, flush them out, wrest them out from the dross in which they remain mired, how to give them a meaning, a tongue, to let them, finally, speak of what is, of what we are. What’s needed, perhaps, is finally to found our own anthropology, one that will speak about us, will look in ourselves for what for so long we’ve been pillaging from others. Not the exotic any more, but the endotic.

Gregory Day’s writing has long been interested in the “infra-ordinary”. It draws together the everyday detail of both human and non-human lives. Day’s curiosity about language is annoyingly infectious, and has me thinking about the complexity of the idea of the “endotic” in a settler-colonial context, entangled as it is in the fraught nature of terms like invasive, introduced and native species. What he is trying to do here is imagine what the endotic might mean in this place. It is big-picture thinking.

The means by which Day carries out this experiment is primarily aural: moments of numinous experience for Sarah are tethered to “sound-made matter”. This is absolutely quotidian: creaking harnesses, swearing men, rustling grass. Sarah is trying to find a way to allow the sounds of the natural world, “the wild and ferny worlds upstream, to intervene”.

The Bell of the World is also full of tender, understated interpersonal relationships, which take place against the backdrop of receptive natural worlds and dysfunctional social ones. These relationships are observed in their dailiness, placed on an equal footing with observations of the non-human life on the Ngangahook Run.

You Yangs Regional Park, Little River.
Heidi Finn/Unsplash, CC BY-SA

The music of place

The closing chapters of The Bell of the World are breathtaking. They involve two contrasted set-pieces of communal listening: one in a city concert-hall (a collision of real and fictive worlds), the other as dusk falls on the Ngangahook Run. Literature points us to music as an art that enables its audiences to listen to the world around them. It encourages them to hear the oldest stories of this place, and new ones too.

In this sense, Day is echoing what First Nations writers in Australia have been saying with their work for decades now: listen to this place and the people who belong here. Pay attention! The Bell of the World models a ceding of authority that is grounded in the belief that forms of connection – underground, across air and ocean – are possible.

The notion of artistic production that emerges from the pages of The Bell of the World is not one of solitary genius, but rather of an artist in deep conversation with the plants, animals and soil around him, as well as with the works of other artists near and far. Day’s novel is clearly in conversation with Joseph Furphy, Herman Melville and Patrick White, but it also seems to me to be carrying on a more profound conversation with other big (in all senses of the term) novels of recent decades: those of Alexis Wright and Kim Scott.

The tragic-comic presentation of parochial yet homicidal townsfolk, the attempts to centre attachment to place and have the living ecology of that place fill the page, the use of disorienting narrative modes, the scenes of failed and hopeful listening – all of these things suggest to me that Day is a novelist who has been doing some good listening of his own.

There is a joyful and laudable ambition to this novel, which seeks to tell a big and true story about being in this place, while pointing to how it might be possible to make and attend to art in it. In the course of one of its many reported conversations about fiction itself, its characters speak about writing which might at once hold “much humorous companionship” and be “a lively and enigmatic seedbank” for ideas and ways of seeing. The Bell of the World finds an unruly way of being both.

For remote Aboriginal families, limited phone and internet services make life hard. Here’s what they told us

It’s well understood that the digital divide disproportionately affects people living in regional Australian communities. Remote Aboriginal communities in particular are among the most digitally excluded, yet there is little research looking at how these families experience digital inclusion.

Our research project, Connecting in the Gulf, shares stories directly from Aboriginal families living on Mornington Island, off the coast of Queensland in the Gulf of Carpentaria. Our full report is published online.

Working with the community, we developed a research method called “show and yarn” in which families showed us their devices and yarned about their experiences of digital inclusion.

Yarning is an Indigenous way of sharing knowledge. It was an important aspect of our work, since better outcomes are achieved when Indigenous people have a say in the design and delivery of policies, programs and services that affect them.

How do families living remotely connect?

Mornington Island residents have poor quality mobile and broadband services, and few options. The island’s only mobile network, Telstra 4G, is concentrated on the township of Gununa and is prone to congestion and outages.

The other main digital services are:

a free community wifi spot in Gununa with a 100-metre radius
a few solar-powered and satellite-enabled outstation phones placed across the island
the option to purchase NBN satellite plans from certain providers.

A cyclone-proof, solar-powered outstation phone about 20km from the township of Gununa.

The island, which has about 1,200 residents, is slated to receive a major upgrade under the Regional Connectivity Program sometime soon, but families were unaware of when this would happen.

Extending a culture of sharing

The families we spoke to told us they use their mobile phones almost exclusively to make calls and access the internet.

In many cases, devices are shared between several family members, and data is shared via hotspotting when someone runs out. This is reflective of a broader culture of sharing, but can also be a source of conflict.

As one community member told us:

I hear a lot of people […] On Facebook, my mother is talking about hotspotting, they are sick of hotspotting […] I’ve got no data because we’ve got to hotspot for them […] If someone wants to use the internet to do a bank transfer, they’ll come up and ask.

Although families can purchase contract-based satellite internet connections, they spoke of poor past experiences and a fear of being locked into contracts. They said they would rather rely on prepaid credit than risk going into debt.

Interviewees also preferred to use data in their own homes despite the free community wifi spot, reflecting a family-oriented way of being.

Mornington Island residents showed us their devices and yarned with us about how they experienced digital inclusion.

Digital literacy is a challenge and opportunity

The families spoke of a gap between young people who quickly learn how to use technology, and Elders who aren’t as savvy online. We heard stories of young people pestering family members for online passwords and hotspots, and then using and/or sharing these with other people without permission.

As one person explained:

Some family members do feel like you’re taking advantage of them at times, when they feel like ‘Oh, I should share’. And it’s the same way with the banking, with the money. They’d feel like they’re obligated to share.

They also described how limited and unreliable mobile phone reception and coverage were impacting cultural activities.

For instance, phone reception stops just out of town and doesn’t cover most of the land and sea of the island. Sick and elderly people with safety concerns are scared to leave the township for activities out on Country.

One Elder suggested more young people would go out for cultural activities if outstations had better phone and internet coverage:

I think it’ll make them happy and have that pride in being out on their own land […] Whether it’s newborn turtle, or crab, fish, and them showing it off and it’ll give them that self-pride and happiness […] ‘This is what I caught’ – and they’ll show more than one family (on Facebook).

Some families had access to tablets and gaming consoles, mostly used by hotspotting prepaid mobile data.

What’s being done about the digital divide?

In January, the federal government established a First Nations Digital Inclusion Advisory Group to accelerate progress towards Closing the Gap targets. An Indigenous Digital Inclusion Plan is also being developed, with contributions from key stakeholders. Both of these developments are promising.

Boosting infrastructure in remote Aboriginal communities is not commercially attractive, given the small number of residents. Yet it’s essential for ensuring these families feel safe, can continue cultural practices, and can access the many employment, health and education benefits of being online.

Most of all, we must listen to Indigenous voices and work with these communities to improve speed, reliability and access to services. Organisations such as InDigiMOB are working hard to achieve this.


Long COVID puts some people at higher risk of heart disease — they need better long-term monitoring

Lasting damage to the heart and brain is an aspect of long COVID that should receive much more attention than it has so far. We have sufficient evidence now to call for ongoing monitoring of individuals across the population.

At least one in ten people – and probably more – develop long COVID after the acute infection and many experience persistent debilitating symptoms, including fatigue, a disturbed sense of smell or taste, shortness of breath, brain fog, anxiety and depression.

But a much smaller group of people develops more life-threatening disorders, particularly cardiovascular disease, which includes heart attacks and strokes. The scale of this problem is now clearer.

We know that SARS-CoV-2, the virus that causes COVID, can directly infect heart tissue and cause microscopic blood clots, which can sometimes culminate in deep vein thrombosis, pulmonary embolism, myocardial infarction and stroke.

Several studies now show an elevated risk of cardiovascular outcomes following COVID and there may also be hidden pathology that will only emerge as people age.

We need to monitor people with a history of COVID, at least with regular check-ups by family practitioners. Even better, we should establish a registry to facilitate research and healthcare for people at risk.

Cardiovascular disease after the acute infection has passed

The first study to present large-scale data on long-term (rather than acute) cardiovascular complications from COVID was based on the national healthcare databases of the US Department of Veterans Affairs. It established a cohort of almost 154,000 individuals with COVID and two comparison groups, each of more than five million.

The research showed that, after 30 days, people with COVID were at increased risk of stroke (1.5-fold higher risk) and transient ischaemic attacks of the brain (1.5-fold), abnormal heart rhythms (1.7-fold), ischaemic heart disease (1.7-fold), pericarditis (inflammation of the outermost layer of the heart; 1.9-fold), myocarditis (inflammation of the heart muscle; 5.4-fold), heart failure (1.7-fold) and clotting disorders (2.4-fold).

These higher risks increased with the intensity of required care during the acute COVID phase but were evident even among those not hospitalised.

Another study focused on a younger group (mean age of 43 years), using data held by US healthcare organisations, with more than four million individuals who had completed a COVID test. Like the veterans study, follow-up began after 30 days and continued for 12 months.

The researchers compared rates of cardiovascular outcomes for groups with and without COVID infection. Those with COVID had a higher risk of stroke (1.6-fold) and transient ischaemic attacks (1.5-fold), atrial fibrillation (2.4-fold), pericarditis (1.6-fold), myocarditis (4.4-fold), acute coronary artery disease (2.1-fold), heart attack (2-fold), heart failure (2.3-fold) and clotting disorders such as pulmonary embolism (2.6-fold).

The risk of a major adverse cardiovascular event was 1.9-fold higher. Risk for any cardiovascular outcome was about 1.6-fold higher, as was the risk of dying. These higher risks were seen among both men and women. The risk of dying was greater among people aged 65 and older.

Higher healthcare needs and excess deaths

The most recent study analysed insurance claims from every US state to explore one-year outcomes in a group of more than 13,000 people with long COVID, compared with a matched group of more than 26,000 individuals without a history of COVID.

During follow-up, the long-COVID cohort used more healthcare for a wide range of adverse outcomes, including cardiac arrhythmias (2.4-fold higher), pulmonary embolism (3.6-fold), ischaemic stroke (2.2-fold), coronary artery disease (1.8-fold) and heart failure (2-fold), as well as emphysema and asthma.

The people with long COVID were more than twice as likely to die as the COVID-free controls.

It is notable that these studies produced similar patterns of risk for cardiovascular disease. The risks of both myocarditis and clotting disorders were particularly elevated.

Another study took a different approach, estimating excess deaths due to cardiovascular disease across multiple pandemic waves in the US. From March 2020 to March 2022, there were more than 90,000 excess deaths from cardiovascular disease – 4.9% more than expected.

What might we be missing?

These are all observational studies. There is always the concern that some unmeasured variables may explain the findings. However, the close consistency of these studies makes this less likely. The following two reports suggest that, if anything, these studies may be underestimating the size of the problem.

In a community-based UK prospective study of long COVID among low-risk individuals (low prevalence of comorbidities and only 19% hospitalised with acute COVID), 26% showed mild heart impairment four months after their initial diagnosis.

Cardiovascular MRI scans of 534 individuals with long COVID revealed one in five showed some cardiac impairment at six months, which persisted in more than half the group at 12 months. These two studies suggest sub-clinical cardiac disorder may be a much more common manifestation of long COVID than expected.

Does vaccination help?

One way to establish whether COVID infection is responsible for the higher rates of cardiovascular disease is to ask whether vaccination reduces risk (as it does with the more common symptoms of long COVID).

Two studies suggest vaccination approximately halves the risk of severe cardiovascular outcomes. A Korean study shows fully vaccinated people had less than half the risk of both heart attack and stroke compared to an unvaccinated group.

A similar study in the US – with groups categorised as fully vaccinated, partially vaccinated, and unvaccinated – observed most cardiovascular events in the unvaccinated group (12,733 of 14,000). Affected individuals had markedly more co-existing conditions, including previous major cardiovascular events, type 2 diabetes, high blood lipids, ischaemic heart disease, liver disease and obesity.

However, even taking these pre-existing higher risks into account, those who were fully vaccinated had a 41% lower risk. Partial vaccination conferred a 24% lower risk of severe cardiovascular disease.

The need to monitor people with a history of COVID, especially long COVID, is clear.

Federal budget 2023: Long-term investments are needed to fix Canada’s infrastructure gap

The federal government’s 2023 budget unveiled investments in infrastructure, with a narrative highlighting resilient and sustainable communities, and pointing to Ottawa’s progress and investments to date.

The budget is focused on building communities through infrastructure, housing, transit and connectivity. Much of this emphasizes investments made since 2015, including $33.5 billion to the Investing in Canada Infrastructure Program, and $35 billion to the Canada Infrastructure Bank.

Funding critical infrastructure

The budget’s investments include funding advanced research in infrastructure innovation, and continued investment in the Canada Infrastructure Bank and the Investing in Canada Infrastructure Program.

The Bank will play a leading role in electrification as part of the government’s push for clean power. This will likely position the Bank as the government’s primary financing tool for major electrification projects.

Deputy Prime Minister and Minister of Finance Chrystia Freeland delivers the federal budget in the House of Commons on Parliament Hill in Ottawa on March 28, 2023.

Budget 2023 also commits to engaging with provinces and territories to revise procurement policies to ensure they benefit Canadian workers and build resilient supply chains. There are also investments in port, air and other critical transportation infrastructure.

We know that Canada’s infrastructure is at risk. Federal infrastructure investments can help take financial pressure off municipalities faced with massive funding shortfalls in addressing their infrastructure concerns. With the population expected to grow, infrastructure will continue to be stressed and will struggle to keep up without proper funding.

Budget 2023 provides no new major funds for what is considered essential community infrastructure: roads, water, wastewater and other infrastructure assets. Unlike electrification and connectivity, many aspects of Canada’s infrastructure gap remain relegated to low-priority status.

More investment is needed to address critical infrastructure gaps, but these are investments that Canadians may not be ready to make. Previous budgets have focused on short-term infrastructure investments as an economic stimulus, which doesn’t support the long-term view infrastructure requires.

Canada’s infrastructure gap

A 2013 report on Canada’s infrastructure gap highlighted the chronic issues in infrastructure investments, including the notion that infrastructure remains a political hot potato.

Between the late 1950s and mid-2000s, public investment in infrastructure decreased from around three per cent of GDP to 1.5 per cent, though it began to rise again in 2010. During this same period, the burden of investing in infrastructure shifted significantly from the federal government, with its large revenue base, to municipalities, which have the smallest.

Canada’s infrastructure deficit is estimated at a minimum of $150 billion.

Local governments bear much of the additional infrastructure costs related to extreme events, climate change mitigation and adaptation. In 2013, floods caused around $3 billion in damage in southern Alberta and Toronto. The cost of rebuilding in British Columbia after 2021 flooding has reached nearly $9 billion. The annual cost of natural disasters in Canada could be up to $139 billion by 2050.

A ‘Bridge is out’ sign is seen following flood damage in Merritt, B.C. in December 2021. Extreme weather events like floods and wildfires are placing greater pressure on public infrastructure.

Internationally, governments are struggling with the same issues. From Biden’s $1 trillion infrastructure bill to China’s infrastructure investments, infrastructure demand remains a constant across international communities from large to small.

But the question remains: where and how should we invest? And more importantly, what do you do when too few people seem to pay attention? North Americans have an imbalanced relationship with infrastructure and a limited understanding of priority and need. We care less about infrastructure investments when we can’t see the direct benefits.

What we see in the 2023 budget is a careful dance. The government needs to show it’s making investments in infrastructure without further stretching public finances or making the tough choices that our dilapidated infrastructure requires.

No political party is protected from the curse of the infrastructure deficit, and there are no winners in the game of infrastructure funding. What it does require is that we all collectively take responsibility. This means dealing with public spending deficits, even if that means paying more taxes, and strengthening our relationship with infrastructure and our collective understanding of the role it plays in our daily lives.

Governments will need to take on additional costs, and individuals will need to learn to accept that improving our communities costs money. We all need to learn that the connection between infrastructure and our well-being is closer than we think.

Nashville attack renews calls for assault weapons ban – data shows there were fewer mass shooting deaths during an earlier 10-year prohibition

The shooting deaths of three children and three adults inside a Nashville school have put further pressure on Congress to look at imposing a ban on so-called assault weapons. Such a prohibition would be designed to cover the types of guns that the suspect legally purchased and used during the March 27, 2023, attack.

Speaking after the incident, President Joe Biden issued his latest plea to lawmakers to act. “Why in God’s name do we allow these weapons of war on our streets and at our schools?” he asked.

A prohibition has been in place before. As Biden has previously noted, bipartisan support in Congress helped push through a federal assault weapons ban in 1994 as part of the Violent Crime Control and Law Enforcement Act.

That ban was limited: it covered only certain categories of semiautomatic weapons, such as AR-15s, and applied only to sales made after the act was signed into law, allowing people to keep weapons purchased before that date. It also included a so-called “sunset provision” that allowed the ban to expire in 2004.

Nonetheless, the 10-year life span of that ban – with a clear beginning and end date – gives researchers the opportunity to compare what happened with mass shooting deaths before, during and after the prohibition was in place.

Our group of injury epidemiologists and trauma surgeons did just that. In 2019, we published a population-based study analyzing the data in a bid to evaluate the effect that the federal ban on assault weapons had on mass shootings, defined by the FBI as a shooting with four or more fatalities, not including the shooter. Here’s what the data shows:

Before the 1994 ban:

From 1981 – the earliest year in our analysis – to the rollout of the assault weapons ban in 1994, the proportion of deaths in mass shootings in which an assault rifle was used was lower than it is today.

Yet in this earlier period, mass shooting deaths were steadily rising. Indeed, high-profile mass shootings involving assault rifles – such as the killing of five children in Stockton, California, in 1989 and a 1993 San Francisco office attack that left eight victims dead – provided the impetus behind a push for a prohibition on some types of gun.

During the 1994-2004 ban:

In the years after the assault weapons ban went into effect, the number of deaths from mass shootings fell, and the increase in the annual number of incidents slowed down. Even including 1999’s Columbine High School massacre – the deadliest mass shooting during the period of the ban – the 1994-2004 period saw lower average annual rates of both mass shootings and deaths resulting from such incidents than before the ban’s inception.

From 2004 onward:

The data shows an almost immediate – and steep – rise in mass shooting deaths in the years after the assault weapons ban expired in 2004.

Breaking the data into absolute numbers, from 2004 to 2017 – the last year of our analysis – the average number of yearly deaths attributed to mass shootings was 25, compared with 5.3 during the 10-year tenure of the ban and 7.2 in the years leading up to the prohibition on assault weapons.

Saving hundreds of lives

We calculated that the risk of a person in the U.S. dying in a mass shooting was 70% lower during the period in which the assault weapons ban was active. The proportion of overall gun homicides resulting from mass shootings was also down, with nine fewer mass-shooting-related fatalities per 10,000 shooting deaths.

Taking population trends into account, a model we created based on this data suggests that had the federal assault weapons ban been in place throughout the whole period of our study – that is, from 1981 through 2017 – it may have prevented 314 of the 448 mass shooting deaths that occurred during the years in which there was no ban.

And this almost certainly underestimates the total number of lives that could be saved. For our study, we chose only to include mass shooting incidents that were reported and agreed upon by all three of our selected data sources: the Los Angeles Times, Stanford University and Mother Jones magazine.

Furthermore, for uniformity, we also chose to use the strict federal definition of an assault weapon – which may not include the entire spectrum of what many people may now consider to be assault weapons.

Cause or correlation?

It is also important to note that our analysis cannot definitively say that the assault weapons ban of 1994 caused a decrease in mass shootings, nor that its expiration in 2004 resulted in the growth of deadly incidents in the years since.

Many additional factors may contribute to the shifting frequency of these shootings, such as changes in domestic violence rates, political extremism, psychiatric illness, firearm availability and a surge in sales, and the recent rise in hate groups.

Nonetheless, according to our study, President Biden’s claim that the rate of mass shootings during the period of the assault weapons ban “went down” only for it to rise again after the law was allowed to expire in 2004 holds true.

As the U.S. looks toward a solution to the country’s epidemic of mass shootings, it is difficult to say conclusively that reinstating the assault weapons ban would have a profound impact, especially given the growth in sales in the 18 years in which Americans have been allowed to purchase and stockpile such weapons. But given that many of the high-profile mass shooters in recent years purchased their weapons less than one year before committing their acts, the evidence suggests that it might.

Editor’s note: This is an updated version of an article originally published on June 8, 2022.

First Target, then Nordstrom — why do big retailers keep failing in Canada?

Since Nordstrom announced it was pulling out of Canada, there has been a maelstrom of articles about the pervasive failure and exodus of U.S. retailers from Canada. Aside from Nordstrom, American retailers that have failed in Canada include Bed Bath and Beyond and Target.

But the list of Canadian retail failures is almost as long as the U.S. list, and includes well-known names such as Zellers, Eaton’s and all of the Dylex brands. Meanwhile, some of the most successful retailers in Canada — including Home Depot, Walmart and Costco — are also American.

So the perspective that U.S. retailers are somehow more prone to failure than Canadian retail chains is unconvincing. However, something is going on that makes retailing in Canada more challenging than ever.

Online shift isn’t the reason

One of the key explanations given by pundits for why retail is so challenging in Canada is the switch to online buying. For many, the advantages of online shopping are multifaceted, including convenience, 24/7 availability and a wider selection of products compared to traditional retailers.

According to Statistics Canada, the proportion of retail e-commerce sales rose to 6.2 per cent in 2022 from 3.9 per cent in 2019.

In the U.S., 14.8 per cent of all retail sales were online in 2022. This was a slight decrease from the height of the pandemic, when close to 16 per cent of sales were online. In other words, online retailing has more than doubled in less than six years. Amazon is reported to have a 45 per cent share of online retail sales.

But this doesn’t explain why the Canadian marketplace is more affected by retail failures than other countries. The shift to online buying has affected all Organisation for Economic Co-operation and Development countries, but the level of retail failures and closures in Canada seems disproportionately high.

Zellers’ parent company, the Hudson’s Bay Company, closed almost all the original Zellers stores in 2013, with the final two stores closing in 2020.

The Canadian marketplace

There are four factors that make the Canadian retail market difficult for newcomers to break into.

1. Economies of scale. Canada has a population of 39 million spread across a very large geographic area. Compared to other G-7 countries, retailers don’t benefit from economies of scale in Canada unless they operate across the entire country. A regional operator in the northeast U.S., for example, has a potential market of more than 125 million, while a regional operator in Canada is lucky to have a potential market of 15 million.

When it comes to ordering sufficient quantities of products from overseas manufacturers, Canadian retailers are at a massive disadvantage. With online retailers like Amazon that operate across borders, this disadvantage is amplified. The economies of scale also hamper Canadian firms due to smaller sales volumes. Smaller volumes mean that fixed business costs, like salaries and marketing, are higher on a per product basis.

2. Supply chain challenges. Because Canada is not a densely populated country, the distances that products need to travel to make it to consumers are high, compared to other countries. Regional distribution centres can overcome this challenge, but they could increase the cost and complexity of already-strained supply chains.

3. Tough regulatory environment. Compared to the U.S., Canada has a more regulated environment for retailers. Whether it is employment laws, building permits, environmental regulations or health and safety rules, Canada is more demanding. This creates added cost for retailers in terms of startup, operations and compliance. Canadian compliance laws are tough for good reason — corporations play a key role in the standard of living and are therefore held to a high standard.

4. Canadian identity. While Canadians buy and consume products imported from everywhere, there is a segment of the population that is concerned about Canadian identity. For this group, Canadianness is important. They see Canada as a small country living in the shadow of the U.S., trying to make its mark in the world.

The segment is small, but it still reduces the potential market for foreign retailers that need to compete against an existing retailer with a strong Canadian identity. Because Canada is the smallest of the G-7 countries, this issue is more important in Canada. There are Canadians who don’t want to see more dollars than necessary going south of the border.

Strategies for success

So how can new retailers maximize their chances of success? There are three strategies that, if followed carefully, increase the likelihood of newcomers’ success in Canada.

1. Go big or go home. Because the Canadian marketplace is substantially smaller in size, a newcomer needs to commit to succeed. The perceived advantage of opening a small number of stores is that it reduces risk, but this strategy poses a problem: the economies needed for a company to succeed never materialize.

Going big means more than just having a bricks and mortar footprint across the country. It also means building a strong online presence that allows synergy between online ordering and in-person browsing to flourish. Many consumers who order from the Walmart or Canadian Tire online stores inspect the goods at the physical stores beforehand.

Target expanded into Canada in March 2013, eventually opening 133 stores across the country. But by April 2015, the venture failed and Target shut down all its Canadian stores.

2. Be true to yourself. If a retailer has enjoyed success in another market, it needs to figure out how to replicate that in Canada. This is especially true for big U.S. retailers. Ninety per cent of the Canadian population lives within 160 km of the U.S. border and many Canadians cross the border to go shopping.

Long before Target entered Canada, Canadians knew it to be a more stylish alternative to Walmart with competitive, if not slightly higher, prices. This is what Canadians expected when they went to Target in Canada for the first time. But it was not what they found. Instead, they found empty shelves and prices that were inconsistent with those at U.S. Targets and non-competitive with Canadian Walmarts.

3. Understand the market. The biggest challenge for a new Canadian retailer is recognizing regional differences despite the smaller population. One-size-fits-all is something retailers need to leave behind. Beyond the obvious challenge of developing a specific Québécois retail strategy, the differences across Canada in terms of climate, leisure activities and culture are significant. There are cities in Canada where Chinese New Year and Diwali are as important as Thanksgiving and Christmas, for example.

Most retail failures in Canada can be attributed to a failure to recognize the critical mass needed to compete effectively. By incorporating these three strategies into both launch and operating plans, newcomers may find Canada to be more welcoming than recent history suggests.

Federal budget 2023: Grocery rebate is the right direction on food insecurity, but there’s a long road ahead

With grocery prices at an all-time high, Canadians were paying close attention to the 2023 federal budget and what relief it might bring.

The promise to address affordability materialized as a new “grocery rebate” that will provide money directly to low-income Canadians. This policy takes the right kind of approach to supporting food-insecure Canadians, but it doesn’t go far enough to address the large and longstanding problem of household food insecurity in this country.

Using population survey data collected by Statistics Canada, we estimated that 5.8 million Canadians, including 1.4 million children, lived in households that struggled to afford the food they need due to inadequate finances in 2021. That means one in six households in Canada’s 10 provinces were food insecure. Similarly high rates were charted in 2019 and 2020.

With the recent unprecedented inflation, these statistics can only have worsened.

Direct cash transfer

Although this new measure is being called a grocery rebate, it’s a direct cash transfer to low-income Canadians, delivered through the GST credit system. From what we can see from the budget, the grocery rebate will be almost identical to the one-time GST top-up credit provided in late 2022 as part of the federal government’s Affordability Plan.

As a targeted income support, the grocery rebate is a step in the right direction for addressing food insecurity.


It recognizes that income inadequacy is at the root of the challenges faced by Canadians who are unable to afford the food they need. The only policy interventions shown to move the needle on food insecurity have been those that improve the financial situations of low-income households.

The grocery rebate is a major advance over the federal government’s approach of providing hundreds of millions of dollars to food charities at the start of the COVID-19 pandemic in response to concerns of rising food insecurity.

The government is now taking some more responsibility for protecting households from food insecurity, enabling them to buy the food they need instead of expecting them to turn to charities for help.


There are several promising policy aspects of the grocery rebate. It is income-tested, which means it will only go to low-income households (in this case, single adults earning less than $45,000 and couples earning less than $65,000). These are the households most at risk of food insecurity and most heavily impacted by rising costs of living.

It does not require someone to hold a job or earn a certain amount to be eligible. It can reach people who have had difficulty finding work or are unable to work, like those on social assistance, who are most likely to be severely food-insecure because their incomes are so low.

Since the grocery rebate is delivered through the GST credit mechanism, it should also be non-taxable and not impact the receipt of other benefits. And despite its name, the rebate will be provided automatically, without additional barriers like separate applications or eligibility criteria.

Unfortunately, like last year’s GST top-up, the grocery rebate is far from enough. It is only a small, one-time cash transfer in a budget absent of any enhancements to other income supports.

Eligible singles will receive up to $234 as a single one-time payment, and a couple with two children will receive up to $467. While surely helpful to low-income households, this amount comes nowhere close to making up for the drastic rise in costs of living.

The one-time nature of the grocery rebate presents the struggle of low-income households to afford food as a temporary emergency that will pass, and isn’t tied to other policy decisions. But a one-time benefit cannot address the chronic inadequacy of public income supports and wages underlying the persistence of food insecurity in this country.

People shop in a grocery store in Montréal in November 2022. After enjoying 30 years of a relatively low and stable inflation rate, Canadians are grappling with the highest levels of inflation seen in nearly four decades.

Expand the Canada Child Benefit

Moving forward, the federal government needs to go further with the direction of this rebate by providing more adequate support for low-income Canadians.

Our recent study shows that a more generous Canada Child Benefit for low-income families would reduce their probability of food insecurity.

Prime Minister Justin Trudeau participates in a virtual conversation in Ottawa in May 2021 on the Canada Child Benefit.

This could even be achieved in a way that is cost-neutral by redistributing funds currently used to provide the Canada Child Benefit to very high-income families.

Other federal policy levers include the Canada Workers Benefit and Employment Insurance (EI). The 2021 federal budget started a process for reforming EI that is still ongoing. New research highlights the potential to protect Canadians from food insecurity by increasing the amount of money provided by EI and making it accessible to more workers.

With health care another budget priority, it’s important to recognize that food insecurity is deeply connected to health-care use. Food-insecure adults are more likely to require various health-care services, including acute care, mental health care and emergency services, and incur greater health-care costs. Reducing food insecurity could offset considerable public health-care expenditures.

At this critical juncture of high food insecurity and unprecedented inflation, the 2023 federal budget is heading in the right direction, but falls short of taking the necessary steps to address food insecurity.

How branding can show people’s love for a place and also help to highlight local challenges

The I ♥ NY logo was launched in the 1970s when New York City was at its grittiest and most dangerous. Since then graphic designer Milton Glaser’s creation has been emblazoned on every kind of souvenir imaginable, not to mention inspiring movies, clothing, graffiti and even food.

More than 50 years later, New York has just updated its iconic branding – not for the first time – to say We ♥ NY as part of an attempt to revitalise the city after COVID lockdowns.

And while lots of people hate the rebrand, it still reflects the intent behind the much-loved original logo. These days it’s hard to argue that the brand hasn’t done the job of communicating exactly how New Yorkers – and many tourists – feel about the city.

Indeed, unlike the kind of brand advertising created for a product, this campaign was never designed to sell anything, but to communicate a feeling about the city by its people. And if people feel more positive about a city or an area, they will be more ready to help improve it.

Such campaigns are developed as part of a branding process used to whip up feelings about a place. These so-called “place branding” efforts can gather communities around whichever ideas matter most to these people, whether they are social, economic, or even environmental.

Developing a place brand can be complex and challenging, but also immensely rewarding. It can involve government, companies and society in general. It can include events, ideas and investments focused on winning over visitors, residents and investors – all to help social community and local businesses thrive and grow.

People Make Glasgow is an example of a flexible place brand that can be associated with a wide range of assets and activities. But this kind of brand doesn’t have to convey a straight, positive message about an area, town or city, it can also be connected to specific challenges.

Crowds pass underneath a sign emblazoned with Glasgow’s place branding logo on Buchanan Street in the city.
Kilmer Media/Shutterstock

More recent attempts at branding a location have aimed to galvanise communities to work together to create and communicate a shared identity – not just right now, but also in the future. In many cases, this includes highlighting challenges such as the impact of climate change.

Using branding to inspire support

As climate change increasingly affects areas in different ways, communities are starting to use place branding to help address specific environmental challenges. This makes sense since people tend to be attached to where they live, and communities often seek ways to act locally to work against or mitigate the effects of climate change.

Inspired by Iceland is a good example of this. The country launched a “premium tap water” brand in 2019 to encourage residents and visitors to go plastic-free while in Iceland by drinking its tap water.

Integrating climate-related issues into the branding process communicates to everyone – including tourists, investors, residents and public and private sector bodies – that climate action is a priority. It shows that it’s integral to local identity and discourse, as residents seek to protect their home’s environmental features.

Place branding may also affect local or even national government policy making. This is what happened in Palau, a Micronesian island in the western Pacific. In 2017 its government started to require all visitors to sign a pledge to be “ecologically and culturally responsible” before they could set foot in the country.

The Faroe Islands in the north Atlantic, took a slightly different approach in 2019 by declaring itself “closed for maintenance, open for voluntourism”. This initiative was used by islanders and local businesses to promote community cohesion. It also offers tourists a unique chance to connect with the core values of the country.

Adapting to new challenges

Most places are limited in their ability to adapt to challenges such as climate change. Unlike residents, local businesses and tourists, a city or country can’t relocate itself. Instead, an area must adapt, which can become a multifaceted and politically challenging process simply due to the range of people and organisations involved. Diverse community needs and imbalances of power held by public and private sector organisations only add to the challenge.

In reality, even though place branding is very much about community cohesion, diverse communities are not necessarily equally involved in the decision-making process. It’s important to recognise that initiatives – whether national, regional, or local – can only go so far, and policy-led change is also required, especially when dealing with challenges such as environmental degradation.

Place branding has become a useful tool to accompany such policies. People can also become quite attached to these brands. Indeed, rather than any reluctance to help the city face new challenges, the opposition to the We ♥ NY update shows the strength of feeling for the city and perhaps even for its brand.

Sex education review: controversial proposals risk failing young people

Prime Minister Rishi Sunak has brought forward a planned review into sex education in English schools.

This is in response to concerns raised by Conservative MP Miriam Cates over sex education that is “age-inappropriate, extreme, sexualising and inaccurate” being taught in schools. Nearly 50 Conservative MPs signed a letter coordinated by Cates which called for an independent inquiry into the supposed delivery of inappropriate sex education content to young children.

A report on sex education in schools, commissioned by Cates, published in March 2023 and presented to Sunak, criticises numerous elements of current sex education teaching. The report was carried out by the New Social Covenant Unit, an organisation set up by Cates and another MP, Danny Kruger.

If followed, the recommendations in the report could roll back decades of significant progress in supporting children and young people, with adverse effects on their health and wellbeing.

Sex education is vitally important. It helps young people navigate situations which may be full of anxiety and uncertainty. It helps them recognise unwanted or inappropriate behaviour. Research has found that a lack of sex education is a risk factor for sexual abuse.

‘Age appropriate’

The report calls for “age appropriate” sex education, stating that “in some cases the only ‘age appropriate’ approach might be to not introduce the subject at all”. It holds that there should be clear parameters on what should be taught. However, government guidance already provides this, outlining topics and at what age ranges they should be taught.

There is also a danger in being too prescriptive. Different children and young people have different needs from their sex education lessons. For instance, if violent porn is being shared among a class of young people, teachers have to act fast to help them deal with what they have seen, with consideration for consent and respect.

Fundamentally, sex education has to be realistic. It is highly likely that young people will see pornographic content and receive unwanted images at some point before they get to 18. This has to be addressed through safe sex education lessons.

Teaching children about relationships and sex often throws up moral issues. Sexual desire can be seen as an affront to morality, making sex something that has to be controlled and monitored. This, coupled with current perceptions of children, makes discussion of sex in schools difficult. This moral panic also makes sex education a valuable political pawn.

Discussing sex with children and young people is often, incorrectly, framed as encouraging sexual behaviour. But a plethora of research demonstrates that the more information young people have about sex the more likely they are to delay it.

The report commissioned by Cates questions a “sex positive” approach to sex education that seeks to reduce shame. It states that “‘sex positivity’ does not seem to tolerate other ethical systems of thought that favour restrictions, boundaries, see a purpose in shame, or which have moral codes that might exclude certain practices or oppose ‘sex positivism’ itself”.

But shame is incredibly damaging. Shame silences young people. It means that they do not seek information and support when they need it, putting themselves at risk. What’s more, linking sex to morality can increase feelings of shame.

LGBTQ+ students

The report also presents a worrying approach to sex education for LGBTQ+ and gender diverse children and young people. It states that LGBTQ+ matters take a “disproportionately dominant place in the curriculum”. But research suggests that sex education in the UK is predominantly delivered from a standpoint focused on heterosexual sex and relationships.

This means that LGBTQ+ children and young people lack relevant and affirming sex education. This lack has been linked to higher rates of STIs, HIV and even teen pregnancy among LGBTQ+ young people.

The report also questions teaching on issues relating to gender identity. It states that “…of particular concern is that a political and ideological bias in RSE teaching is promoting trans identification to school pupils”.

Research shows that transgender and gender diverse children face discrimination at school. But there is evidence that being in environments that are positive about gender and personal identity lead to improved health and wellbeing outcomes among this group.

Calls in the report for an emphasis on teaching the biological mechanics of sex – such as procreation and contraception – almost exclusively focuses upon sexual relations of cisgender and heterosexual young people. But decades of evidence shows that comprehensive and inclusive sex education results in positive mental health outcomes and safer school climates for all young people.

Also, a greater proportion of LGBTQ+ inclusive sex education leads to lower rates of poor mental health and reduced school-based victimisation for LGBTQ+ students.

We need to listen to young people

A crucial aspect missing from the report is the input of children and young people themselves. Young people want to engage in designing sex education that benefits them. They are often being taught what they already know.

Quite simply, we do not know what normal sexual behaviour is in children and young people and we need them to guide us in what they need and when. Good sex education should focus on their needs, rather than it being imposed upon them. It should be designed with young people and underpinned by research.

We must provide young people with sex education that keeps them safe. The recommendations in the report commissioned by Cates risk undoing over 20 years of valuable progress in helping young people make informed decisions about their own sexuality and relationships without shame.

How the private rental sector created a homelessness crisis in Ireland and England

There is a homelessness crisis in Ireland. More than 11,000 adults and nearly 3,500 children were living in emergency homeless accommodation at the end of 2022 – a 300% increase since 2014. And these figures do not even capture the full extent of hidden homelessness because thousands of people in Ireland are living in unaffordable, unsafe, insecure housing.

In recent years Ireland’s private rental sector has been used to plug the gap left by reduced social housing availability. Combined with booming demand from private renters, properties are now in very short supply.

England is experiencing similar shortages, with recent figures showing the number of homes available to rent across the UK has fallen by a third over the past 18 months, while average rents are now £2,200 a year higher than before COVID.

But both Ireland and England are suffering from much more than too few houses or too much demand. People are struggling to afford decent homes after decades of policies designed to boost homeownership have turned property into piggy banks rather than homes.

In Ireland, many of the people now classed as homeless were evicted from the private rented sector and have been unable to find affordable housing again. Surging demand has sent rents rocketing, leaving few dwellings available. And government plans to lift a COVID-era ban on evictions from April 1 2023 may only make matters worse.

Access to affordable rented housing has become a major social and indeed political issue in Ireland, with thousands taking to the streets in recent months and years to call on the Irish government to end the crisis.

Demand has been driven by population increases, new household formations, inward migration (40% of private rental sector tenants are non-Irish), holiday lets and the proliferation of high income industries such as tech.

The average new rent is now almost €1,500 (£1,300) nationally, and over €2,000 in Dublin – a more than 100% increase over a decade. Concern over rising rents led to the creation of rent pressure zones by the government in 2016, in which rent increases are capped at 2% annually. Still, some 40% of private tenants in Ireland pay more than 30% of their net income in rent.

Younger people are most likely to be affected. Half of all 25-34 year olds in Ireland live in private rentals, alongside one-quarter of those aged 35-44. Many members of this so-called “generation rent” are growing increasingly frustrated as they realise they will never own a home.

Decent homes are rising out of reach of many renters.
Simon Bratt/Shutterstock

A new squeeze on rental supply

Small landlords (owners of one or two units) are also leaving the sector, exacerbating the issue. Figures from Irish regulator, the Residential Tenancies Board, show that in 2021 some 60% of eviction notices related to a landlord selling the dwelling, and 23% related to a family member moving in.

Many of these landlords have crystallised capital gains at the peak of recent house price rises, or are retiring. Some may have been reluctant or “accidental” landlords in the first place – indeed half of all rental properties were acquired with an owner-occupier mortgage.

Being a landlord is also an increasingly complicated business: there have been over 100 legislative changes since 2009, leading to claims of “regulatory overload” by the Irish Property Owners Association. Of course, such changes also make it difficult for tenants to understand and enforce their own rights.

Media reports often reduce the debate to questions of supply and demand. After all, there were only 1,100 dwellings available for rent on February 1 2023, compared to an average of 8,500 on that date each year between 2006 and 2021.

But this supply squeeze doesn’t explain why one-third of renters require state assistance with rent, totalling €1 billion per year, or why the private rental sector has grown from 8% of overall tenure in 1991 to 21% in 2016. This is expected to be even higher in the 2022 census returns.

These developments have occurred because Ireland’s private rental sector was boosted by tax relief schemes right up until 2008. This resulted in a flood of buy-to-let mortgages in the early 2000s. Government housing policy has also enlisted the private rental sector to fill a gap in social housing for over two decades.

A man holds up a sign about homelessness at a cost of living crisis protest in Dublin in September 2022.

England’s homelessness crisis

England is also in the grip of an acute housing affordability and homelessness crisis. Homelessness has increased by 75% since 2020 and rough sleeping has risen by 170%. Recorded homelessness in 2023 stands at 270,000 people, although, as in Ireland, this figure excludes the hidden homeless.

And again, just like in Ireland, this homelessness crisis is inextricably linked to the revival of private rentals, with eviction from this type of housing being the main cause of homelessness.

Over the past two decades, private rentals have more than doubled to accommodate over 20% of households in 2021, in line with a corresponding decline in social housing. Rent increases have exceeded the rate of inflation over the past two decades. As a result, private renters spend on average over 33% of their household income on rent – although this figure rises to 38% once £23 billion in annual Housing Benefit payments is excluded.

The roots of England’s affordability and homelessness crisis are similar to Ireland’s. It can be traced to shifts in housing policies since the 1980s involving the privatisation of social housing. The introduction of assured shorthold tenancies (AST) enabled no-fault evictions and fuelled acute rent inflation.

The financialisation of housing and deregulation of mortgage lending also led to a proliferation of buy-to-let mortgages. As a result, there is a growing recognition that the private rental and AST regime must be changed to prevent no-fault evictions, which could help address this crisis.

Ireland and England face common problems with homelessness and a burgeoning but unsustainable private rental sector. Simplistic supply-and-demand slogans belittle the part played by private rentals within a complex housing system. In fact, the interaction between social housing availability, financialisation and decades of unfit housing policies has created a unique scenario of homelessness in both countries.

Debate: The case of Pinar Selek is a stark reminder of the dangers faced by academics in Turkey and around the world

On 31 March 2023, a trial will be held in Istanbul against Pinar Selek, a sociologist, writer, feminist, anti-militarist and pacifist activist, exiled in France since the end of 2011 and facing life imprisonment in Turkey.

Selek has suffered from relentless judicial persecution by the Turkish authorities for 25 years – half a life. The reason? Her refusal to reveal the identity of the people she interviewed during an investigation she conducted on the Kurdish movements.

Arrested in July 1998, she was tortured and imprisoned for more than two years. Behind bars, she learned she was accused of having planted a bomb alleged to have exploded in Istanbul’s spice market, killing seven people and injuring 121.

Selek was released at the end of December 2000 and acquitted in 2006, 2008, 2011 and 2014, after expert reports showed the tragedy was caused by the accidental explosion of a gas cylinder. Although the Turkish judiciary has cleared her four times, the prosecutor has appealed each acquittal. After a silence of almost nine years, the Turkish Supreme Court annulled her latest acquittal, triggering this new trial, which will be held in her absence.

Even before the 31 March hearing, Turkey had issued an international arrest warrant for Pinar Selek. Nine years after her last acquittal, it is difficult not to link the Turkish judiciary’s revived interest in the academic to the presidential and legislative elections scheduled for May and to the celebration of the centenary of the Turkish Republic.

Beyond Pinar Selek’s personal plight, this episode illustrates the repression academics have faced in Turkey for years and which intensified further after the 2016 coup attempt.

Scientific freedom at risk

Wanted for “sociological crime”, the researcher has said, “I will not give up.”

Pinar Selek Support Committee, Basque Country.
Fred Sochard, Author provided

Since arriving in France in 2011, Selek has defended a doctoral thesis in political science at the University of Strasbourg, published numerous scientific works and taught at the Université Côte d’Azur. After two years on a state grant for exiled artists and scientists, she was awarded a permanent teaching and research position at the Université Côte d’Azur in 2022.

Her plight also puts academic freedom at stake. The presidents of the Côte d’Azur and Strasbourg universities, as well as numerous research laboratories and other university and scientific bodies, have publicly taken a stance in her favour. University, student and activist support groups have been formed, and she was appointed honorary president of the Association of Sociologists of Higher Education. A delegation of nearly a hundred French and foreign representatives from the civil, associative, cultural, artistic, political, legal, scientific, academic and student worlds will travel to Istanbul to attend her trial, demand the truth and formally ask for justice to be done.

Engaged in a movement to open up the social sciences to society and to criticise scientific postures in the service of the established order, Pinar Selek is a “scientist in danger”. Even though she obtained French citizenship in 2017, she continues to suffer the political violence of an authoritarian regime that attacks the autonomy of the academic world – a phenomenon over which Turkey has no monopoly. Many Iraqi, Syrian, Afghan, Egyptian, Turkish, Iranian and other academics are paying a heavy price through state repression.

A situation that has escalated in Turkey since 2016

The situation of Pinar Selek reflects the rise of authoritarianism in Turkey, particularly noticeable since the strengthening of presidential powers following the April 2017 referendum.

Following the coup attempt of 15 July 2016, in which hundreds of civilians, soldiers and police officers lost their lives, a large number of academics were designated as targets by the country’s president, Recep Tayyip Erdoğan. The signatories of the Academics for Peace petition have been accused of terrorism and subjected to professional ostracism, prosecution and media lynching.

Among them, 549 academics have been forced to resign or retire, dismissed, or banned from public service under decree laws. The case of the “Gezi Seven” is emblematic of the massive repression of human rights in the country. One of them, the publisher and patron Osman Kavala, imprisoned since 2017, was sentenced to life imprisonment without the possibility of parole after being unjustly convicted of an “attempted coup d’état” for organising and financing the 2013 Gezi protests.

Although the Turkish Constitutional Court acquitted them in a decision of 26 July 2019, these academics have lost their jobs and been subjected to harassment in their professional environment. In addition, the Turkish National Research Agency blocks their publications. Terrorism charges continue, especially in relation to the Kurdish issue: in October 2021, for example, the writer Meral Simsek was sentenced to one year and three months in prison for “propaganda for a terrorist organisation”.

Threats also extend to researchers based in France. In 2019, the mathematician Tuna Altinel, a teacher-researcher at the University of Lyon 1, was arrested in Turkey, accused of terrorist propaganda for having taken part in a public meeting in Villeurbanne on the army’s war crimes in the country’s southeast. Released after three months, he was only able to recover his passport and return to France in June 2021, after a long battle that is not over yet.

Hundreds of abusive arrests, acquittals – most often overturned on appeal by the Court of Cassation – and cases retried despite the recommendations of the European Court of Human Rights, punctuate this bleak picture. But the many hardships faced by researchers have strengthened their solidarity, as evidenced by their stories in Eylem Sen’s documentary Living in Truth.

In the name of researchers’ unconditional freedom of expression

An article published in Le Monde in July 2022 by a group of academics, headlined “By condemning Pinar Selek, the Turkish government is attacking the independence of social science research”, reminds us of the vulnerability of researchers to the attacks they face in many countries.

International conferences and declarations regularly reaffirm the protection of academic freedoms, but maintaining them requires constant struggle by the academic community, and the gains are never permanent: students, professors and researchers are at best suspected or threatened, and at worst arrested, tortured and killed, when authoritarian powers are established to which they refuse to submit.

A “poetry activist”, as she likes to call herself, Pinar Selek – also the author of novels and children’s stories – is subject to a political violence that can only be combated by denouncing it and by overturning the life sentence sought against her. Her tireless struggle against injustice, oppression and attacks on academic freedom exemplifies the plight of threatened scientists in authoritarian countries, but also in democracies.

Our solidarity with her is more than a moral duty. It is part of a shared struggle in the service of freedom of research and of a citizenship that must, more than ever, assert itself as transnational.

Climate change is accelerating – and the UK government is ’strikingly unprepared’

Read successive progress reports by the Climate Change Committee (CCC), the UK government’s statutory advisor on climate change, and you sense the growing frustration. Over the years, the CCC’s assessments of the government’s response to the climate crisis have become more critical, its recommendations more explicit and the tone more direct.

The latest progress report from the CCC concerns the UK’s preparedness for climate change rather than progress toward net zero emissions, but it reaches similar conclusions. The committee is scathing, declaring the country “strikingly unprepared”.

The CCC criticised the government’s national adaptation programme for its lack of vision, ambition and reach. Sector by sector, the report lists failings in the government’s planning for climate change, or where plans exist, in their execution.

Thirteen sectors, from infrastructure and the built environment to health, nature and managed lands, are forensically analysed. The report highlights that “fully credible” planning is in place for only five of 45 key risk areas, while evidence that the country is becoming less vulnerable to climate change is “lacking across the board”.

If this sounds worrying, it should be. It means UK citizens are being left exposed to increased risks of flood damage, food and water shortages, excess deaths from heatwaves, energy outages and infrastructure breakdown, among other threats.

An international challenge

Internationally, the CCC’s impatience is matched by increasingly urgent messages from the Intergovernmental Panel on Climate Change (IPCC), the foremost body of experts on climate change, which issued its latest report earlier this month.

The approval process of IPCC reports, which require sign-off by member governments, keeps its language bland. Yet the report left no doubt that climate change is a rapidly escalating global risk and that, so far, policies and plans do not do enough to address it.

But it also led with a message of hope: there is still time to secure a liveable future for all if we act now.

Countries are not doing enough to cut greenhouse gas emissions – or prepare for their consequences.

Adaptation means dealing with the consequences of climate change, the effects of a warming climate that can no longer be avoided. Emissions reductions and net zero emissions targets deal with the root causes of climate change, the release of greenhouse gases into the atmosphere.

The two strategies are complementary. Yet adaptation campaigners have long lamented that building resilience to global heating gets second billing to net zero, and they have a point.

In the UK, this hierarchy was institutionalised. Within the CCC, emissions reduction was – and remains – the purview of the main committee, while adaptation was dealt with by a separate sub-committee; the judgemental “sub” prefix was dropped only recently.

The more global efforts to reduce emissions veer off track, the more important – and the more difficult – adaptation becomes. At the very least, we will have to adapt to a world that is between 1.5 and 2°C warmer. But on current trends, it could be much more. Failure in one part of the twin strategy puts pressure on the other, and at the moment we are failing on both.

The CCC does not explicitly make this connection in its progress report. It simply points to the fact that the first impacts of climate change can already be felt.

In the summer of 2022, recorded temperatures exceeded 40°C in the UK for the first time. The summer was also very dry, piling stress on ecosystems and farms. People are also becoming accustomed to more frequent winter storms – Dudley, Eunice and Franklin all struck the UK in February 2022.

The CCC estimates that 3,000 people died prematurely at the peak of the 2022 heatwave.
Leighton Collins/Shutterstock

Taking adaptation seriously is crucial because extreme weather is one of the most immediate ways in which the public will experience climate change. In the short term, it is vital that the UK is adequately prepared as these events become more likely and intense, and that it seizes the opportunities and benefits of the transition to a net zero economy.

Retaining global leadership

Even before the COP26 climate summit in Glasgow, the UK prided itself on being a leader in climate change policy. In the Climate Change Act 2008, the UK has a widely admired and frequently copied framework of climate governance. This system is now being put to the test.

The fact that there is an independent body (the CCC itself), which raises the alarm when things are off track, is an important part of this governance framework. This part of the system is working well. However, the real test is whether those warnings are heeded and policy is brought back on track.

The UK government has two immediate opportunities to change tack, and it should take both. On emissions, the launch of its updated net zero strategy is imminent: the government was sent back to the drawing board by the courts, which ruled the earlier strategy inadequate. On adaptation, the government will launch a new national adaptation programme, covering 2023 to 2028, later in the year.

The early signs are not good. The new net zero strategy is rumoured to be as much about “energy security” (and new oil licences) as about climate action. Meanwhile, industry bodies warn that the UK is losing ground in the race to capture the economic benefits of the zero-carbon transition, while the Met Office predicts another year of record-breaking temperatures.

If these warnings are not heeded, the CCC’s frustrations may soon be shared by the wider public, who will rightly question why adaptation measures were not taken earlier.


Does democracy fuel corruption? Most Ghanaians don’t think so

Does democracy breed corruption – particularly in developing countries?

There are strong advocates of the theory. And strong detractors.

Some studies conclude that democracy aggravates corruption. For instance, Jong-Sung You, a noted scholar of public policy, has explored the relationships between democracy, inequality and corruption. He shows, in a study of three East Asian countries, that democracy can worsen corruption when a country has high levels of inequality, which in turn increases clientelism, patronage politics and state capture.

Other studies show that democracy can help combat corruption. One study by the academic Michael Rock, using data from multiple countries, shows that corruption initially increased following democratisation but declined as democracies aged – the turning point coming between 10 and 12 years.

This ‘age of democracy’ theory holds that as a democracy matures, there is time for the rule of law to be strengthened and for transparent, accountable institutions capable of controlling corruption to take hold.

Ghana provides an interesting case study. It has been a democracy for the last 29 years. At the same time, corruption remains a monumental challenge.

In a recent paper I explored the issue. I interviewed Ghanaian politicians, academics, anti-corruption activists, and journalists about whether democracy fuels corruption in Ghana.

A fifth of those interviewed argued that democracy fuels corruption in Ghana, while about 80% disagreed. Most believed that the way democracy is practised is to blame for corruption – not democracy itself.

My study does not suggest there is less corruption in a dictatorship compared to democracy. Instead, I conclude that corruption is still prevalent in Ghana 29 years after democratic elections because the country has a flawed democracy. There has been a failure to establish and implement robust accountability mechanisms to control corruption effectively.

Democracy fuels corruption

My study drew on 25 in-depth interviews with politicians, academics, anti-corruption activists and media personnel. A number of arguments were put forward by the 20% who believe that democracy leads to more corruption. One was that democracy allows some people to gain power and amass illegitimate wealth without consequences.

A politician said:

It looks like a group of friends come together to form a political party, maybe I will say, with the sole interest of looting the state with little intention of solving people’s problems. But unfortunately, we have only two main parties always positioned for power, and it is always family and friends, like a cartel, always come together, steal, and go and another will come.

Others commented that politicians who win power through the polls loot state coffers to pay off past campaign expenses, finance future elections, and accumulate wealth for future use should they lose power.

This group also reported that political parties in power often shielded their corrupt members to protect party reputation and boost electability. This resulted in impunity.

Some also argued that securing justice in Ghana’s democratic system was hard to achieve. This allowed lawlessness and corruption to thrive.

As another politician said:

Everybody is finding a way to see a judge and pay something, and the prosecutor fails to go to court or drops cases, making people continue to misappropriate public funds because they know they can get away with it.

This points to the fact that the rule of law and checks and balances in government are weak.

Democracy isn’t the problem

A range of arguments were put forward among the 80% who believed democracy could not be blamed for persistent corruption. In their view, democracy has helped promote information flow, shedding more light on corruption than in authoritarian regimes.

A media practitioner said:

Whether military or civilian rule, corruption is there. I don’t think democracy in itself has contributed to the problem of corruption. Suppose we lived in a country where there was no democracy, if a monarchy or a military ruler that did something wrong in government, you couldn’t freely come out to talk about it. Democracy has instead helped us talk about it and bring corruption issues to bare, at least in the public domain.

A political scientist and anti-corruption activist put it this way:

No one can say those authoritarian regimes do not see or harbour corrupt practices except that in authoritarian regimes or dictatorship, information flow is limited, so you don’t get to know.

The fact that democratic freedoms have facilitated information flow and shed light on corruption has created an erroneous impression that democracy fuels corruption more than authoritarian regimes do.

The practice is what counts

In response to the question on whether democracy has helped fuel corruption in Ghana, one respondent, a political science scholar, said:

I don’t think so. It is rather the corrupt practice of democracy that brings about corruption. The corrupt practice of democracy that brings about winner-takes-all politics – we have won, it is our time to chop {enjoy} – that is what brings about corruption. But democracy itself wouldn’t bring about corruption.

Another interviewee commented:

The politics of democracy fuel corruption, but it is not the democratic system of government that fuels corruption. It is the way we do it {democracy}, the way we practise that sometimes fuels corruption.

Other participants commented that monetisation of elections and the lack of transparency in political party funding produced corrupt leaders. This made it difficult to combat corruption.

Also, according to interviewees, Ghana’s 1992 constitution provided insufficient checks and balances. For example, the electoral system enables a winner-takes-all politics in which the group, or party, that wins at the polls (and their allies) are able to monopolise resources.

Next steps

For democracy to reduce corruption, several measures are needed, according to research participants. Their views echo arguments made by the scholars Landry Signé and Koiffi Korha.

Research participants emphasised addressing extreme executive power while strengthening the rule of law and horizontal accountability institutions. These include the legislature, the judiciary, and auditing and anti-corruption bodies.

Participants also recommended sustained public pressure on whoever is in power to ensure political commitment to combating corruption.

Ancient DNA is restoring the origin story of the Swahili people of the East African coast

The legacy of the medieval Swahili civilization is a source of extraordinary pride in East Africa, as reflected in its language being the official tongue of Kenya, Tanzania and even inland countries like Uganda and Rwanda, far from the Indian Ocean shore where the culture developed nearly two millennia ago.

Its ornate stone and coral towns hugged 2,000 miles (3,200 kilometers) of the coast, and its merchants played a linchpin role in the lucrative trade between Africa and lands across the ocean: Arabia, Persia, India, Southeast Asia and China.

By the turn of the second millennium, Swahili people embraced Islam, and some of their grand mosques still stand at the UNESCO World Heritage sites of Lamu in Kenya and Kilwa in Tanzania.

Self-governance ended following Portuguese colonization in the 1500s, with control later shifting to the Omanis (1730-1964), Germans in Tanganyika (1884-1918) and British in Kenya and Uganda (1884-1963). Following independence, coastal peoples were absorbed into the modern nation-states of Somalia, Kenya, Tanzania, Mozambique and Madagascar.

The Swahili island settlement of Kilwa, in present-day Tanzania, grew over centuries to be a major coastal city and trading center.
Pictures From History/Universal Images Group via Getty Images

So who were the Swahili people, and where did their ancestors originally come from?

Ironically, the story of Swahili origins has been molded almost entirely by non-Swahili people, a challenge shared with many other marginalized and colonized peoples who are the modern descendants of cultures of the past with extraordinary achievements.

Working with a team of 42 colleagues, including 17 African scholars and multiple members of the Swahili community, we’ve now published the first ancient DNA sequences from peoples of the Swahili civilization. Our results do not provide simple validation for the narratives previously advanced in archaeological, historical or political circles. Instead, they contradict and complicate all of them.

Colonization affected how the story was told

Western archaeologists in the mid-20th century emphasized the connections of the medieval Swahili to Persia and Arabia, sometimes suggesting that their impressive achievements could not have been attained by Africans.

Post-colonial scholars, including one of us (Kusimba), pushed back against that view. Earlier researchers had inflated the importance of non-African influences by focusing on imported objects at Swahili sites. They minimized the vast majority of locally made materials and what they revealed about African industry and innovation.

But viewing Swahili heritage as primarily African or non-African is too simplistic; in fact, both perspectives are byproducts of colonialist biases.

The truth is that colonization of the East African coast did not end with the departure of the British in the middle of the 20th century. Many colonial institutions were inherited and perpetuated by Africans. As modern nation-states formed, with governments controlled by inland peoples, Swahili people continued to be undermined politically and economically, in some cases as much as they had been under foreign rule.

Decades of archaeological research in consultation with local people aimed to address the marginalization of communities of Swahili descent. Our team consulted oral traditions and used ethnoarchaeology and systematic surveys, along with targeted excavations of residential, industrial and cemetery locations. Working with local scholars and elders, we unearthed materials such as pottery, metal and beads; food, house and industrial remains; and imported objects such as porcelain, glass, glass beads and more. Together they revealed the complexity of Swahili everyday life and the peoples’ cosmopolitan Indian Ocean heritage.

For generations, Swahilis have maintained matrilineal family burial gardens such as this one in Faza town, Lamu County.
Chapurukha Kusimba, 2012, CC BY-ND

Ancient DNA analysis was always one of the most exciting prospects. It offered the hope of using scientific methods to answer the question of how medieval people are related to earlier groups and to people today, providing a counterweight to narratives imposed from outside. Until a few years ago, this kind of analysis was a dream. But thanks to a technological revolution that began around 2010, the number of ancient humans with published genome-scale data has risen from none to more than 10,000 today.

Surprises in the ancient DNA

We worked with local communities to determine the best practices for treating human remains in line with traditional Muslim religious sensitivities. Cemetery excavations, sampling and reburial of human remains were carried out in one season, rather than dragging on indefinitely.

A detailed line drawing captures the way one person’s remains were discovered during cemetery excavation at Mtwapa in 1996.
Eric Wert, 2001, CC BY-ND

Our team generated data from more than 80 people, mostly elite individuals buried in the rich centers of the stone towns. We will need to wait for future work to understand whether their genetic inheritance differed from people without their high status.

Contradicting what we had expected, the ancestry of the people we analyzed was not largely African or Asian. Instead, these backgrounds were intertwined, each contributing about half of the DNA of the people we analyzed.

We found that the Asian ancestry of the medieval individuals came largely from Persia (modern-day Iran), and that Asian and African ancestors began mixing at least 1,000 years ago. This picture is an almost perfect match to the Kilwa Chronicle, the oldest narrative told by the Swahili people themselves – one almost all earlier scholars had dismissed as a kind of fairy tale.

Another surprise was that, mixed in with the Persians, Indians were a significant proportion of the earliest migrants. Patterns in the DNA also suggest that, after the transition to Omani control in the 18th century, Asian immigrants became increasingly Arabian. Later, there was intermarriage with people whose DNA was similar to others in Africa. As a result, some modern people who identify as Swahili have inherited relatively little DNA from medieval peoples like those we analyzed, while others have more.

One of the most revealing patterns our genetic analysis identified was that the overwhelming majority of male-line ancestors came from Asia, while female-line ancestors came from Africa. This finding must reflect a history of Persian males traveling to the coast and having children with local women.

One of us (Reich) initially hypothesized that these patterns might reflect Asian men forcibly marrying African women because similar genetic signatures in other populations are known to reflect such violent histories. But this theory does not account for what is known about the culture, and there is a more likely explanation.

Even under colonial rule and through modern times, traditional Swahili culture has retained its matriarchal nature.
Pictures from History/Universal Images Group via Getty Images

Traditional Swahili society is similar to many other East African Bantu cultures in being substantially matriarchal – it places much economic and social power in the hands of women. In traditional Swahili societies even today, ownership of stone houses often passes down the female line. And there is a long recorded history of female rulers, beginning with Mwana Mkisi, ruler of Mombasa, as recorded by the Portuguese as early as the 1500s, down to Sabani binti Ngumi, ruler of Mikindani in Tanzania as late as 1886.

Our best guess is that Persian men allied with and married into elite families and adopted local customs to enable them to be more successful traders. The fact that their children passed down the language of their mothers, and that encounters with traditionally patriarchal Persians and Arabians and conversion to Islam did not change the coast’s African matriarchal traditions, confirms that this was not a simple history of African women being exploited. African women retained critical aspects of their culture and passed it down for many generations.

How do these results gleaned from ancient DNA restore heritage for the Swahili? Objective knowledge about the past has great potential to help marginalized peoples. By making it possible to challenge and overturn narratives imposed from the outside for political or economic ends, scientific research provides a meaningful and underappreciated tool for righting colonial wrongs.

Why children misbehave when they are tired

Being tired is a feeling we often experience. When we do certain activities – physical or mental – over a period of time, or even after experiencing intense emotional states, we feel tired, perhaps even exhausted.

We could define fatigue as a lack of strength after physical, intellectual or emotional work. Boredom, unhappiness, disappointment, weariness, tedium or annoyance can also leave us exhausted.

In any case, fatigue has curious effects on our behaviour, resulting in greater difficulty maintaining self-control.

This is very perceptible in children, because when they are tired, either after strenuous activity or as a result of boredom or disappointment, they tend to behave in ways that annoy us. They tend to “misbehave”. But why is this?

Failures in the brain control tower

Let’s start by talking about how the brain works. The brain is the organ of thought where all our behaviours are generated and managed. Each of its different areas fulfils specific tasks within the overall functioning of the organ.

Behavioural control is handled specifically by an area called the prefrontal cortex. It is located in the frontmost part of the brain, just behind the forehead, in the most superficial layers of neurons – hence its name.

The prefrontal cortex is responsible for managing complex cognitive tasks, which are grouped under the name of executive functions. They work like airport control towers, making all air traffic flow smoothly in a flexible, non-static way, so that it can adapt to any situation that may arise: a change in atmospheric conditions, a flight delay, etc. In other words, the prefrontal cortex helps us to control our behaviour.

Executive functions include the ability to reflect and plan, to make decisions based on reasoning and to rationalise and manage our emotional state.

Also included in this group is working memory: the set of processes that allows us to store and temporarily handle information for complex cognitive tasks such as language comprehension, reading, mathematical skills, learning and reasoning. And there is cognitive flexibility: the brain’s ability to adapt our behaviour and thinking to changing, novel and unexpected concepts and situations, and the mental capacity to hold several concepts in mind at once.

What does all this have to do with fatigue, and how does it affect the behaviour of adults and children? It’s quite simple. Although we may like to boast that we have a very large brain, it represents only 2-3% of our body’s total mass. And yet it consumes no less than 20-30% of our metabolic energy – a striking disproportion!

And of the entire brain, the part that consumes the most is precisely the prefrontal cortex.

When we are short of energy, we are more likely to mess up

When we are tired, our metabolism tends to spread the usable energy across the body, leaving less available for the prefrontal cortex to perform its functions at maximum efficiency.

In other words, we find it harder to think, plan, decide, manage emotions and store and handle information because the prefrontal cortex has less fuel to function. This also makes our thoughts less flexible and more rigid. As a consequence, we lose the ability to control our own behaviour.

So when we are tired, we tend to say things that we shouldn’t, that we know might hurt people we care about. And we do this because the executive functions – the control tower of our behaviour – work less efficiently.

And the same thing happens to children. Despite knowing that there are things they cannot do or that we do not allow them to do (and that they are well aware of), when they are tired, the likelihood of them doing these things, of them “misbehaving”, increases.


Boredom has a similar effect to tiredness

Interestingly, when we are bored, disappointed or fed up, something similar happens; although the reason is slightly different.

It turns out that when we are demotivated, the brain also receives less energy, meaning that the prefrontal cortex cannot function at full capacity. Or, to put it the other way around, motivation increases the blood flow to the brain and, with it, the available energy, which in general improves the functionality of executive functions.

That is why, when we are motivated, we usually think, plan and decide better, and can manage our emotions much more effectively. We should not overdo it, though: excessive motivation can also hyper-energise the brain, reducing the efficiency of its functioning, as a recent study has demonstrated.

One final, curious fact: there is an upside to being tired. After carrying out a strenuous activity, we tend to be more creative, because when self-control fails, ideas emerge without filters – or with fewer conscious ones.

These neurons are the reason you yawn when you see others do it – and they could help us teach children more creatively too

Have you ever wondered why when we see someone yawn, we yawn almost immediately? Or how newborns imitate facial gestures like sticking out their tongue? And what about how we learn to use scissors or to colour?

It all has a lot to do with a particular type of neuron called “mirror neurons”.

What are mirror neurons?

Mirror neurons are amazing neurons that participate in important processes such as learning, empathy and imitation.

They were discovered by chance by the Italian neurobiologist Giacomo Rizzolatti in 1996. Looking at the brain of a macaque, Rizzolatti and his team recorded neurons that were activated not only when the animal carried out an action, but also when it observed another animal doing the same activity. What’s more, in both cases the premotor cortex was activated in an identical way.

It was soon found that exactly the same thing happens in humans. For example, when we watch someone climb stairs, the motor neurons that correspond to those movements are activated without us taking a single step. When we observe another individual performing an action, without even having to speak, our mirror neurons can put us in the same situation, simulating the action mentally as if it were happening to us.

This type of nerve cell even enables us to understand the intention with which an action is carried out.

Another property of mirror neurons is that they are activated by the sound associated with an action. For example, when we hear paper being torn, they mentally simulate that action – even if we do not actually see it taking place.

Where are they?

Mirror neurons are located in four brain regions that communicate with each other: the premotor area, the inferior frontal gyrus, the parietal lobe, and the superior temporal sulcus. Each of these is responsible for a different function:

The premotor area manages movements and controls muscles
The inferior frontal gyrus is involved in executive control processes, the management of social and affective behaviours and decision making
The parietal lobe analyses visual sensory information
The superior temporal sulcus is involved in auditory processing and language.

Learning and empathy

The existence of mirror neurons is essential for our species. That’s mainly because of the role they play in learning by imitation and observation but also because they participate in language acquisition and are essential in the development of empathy and social behaviour – they allow us to understand the actions of other people and their emotions.

Mirror neurons are implicated in numerous clinical conditions. Their function is altered in autism, schizophrenia, apraxia (an inability to perform motor tasks) and neurodegenerative diseases, among others.

For example, in autism, there are motor, language and social problems that coexist. It is no coincidence that all these functions are related to brain areas where mirror neurons are located.

Harnessing mirror neurons in the classroom

We can consider observational learning to be any moment in which an action is observed and something new is learned or previous knowledge is modified. We must not confuse imitation (for example, copying an individual’s gestures) with observational learning: the latter is a lasting change in the individual that produces a new response.

By observing a process, mirror neurons prepare us to imitate the action being observed. If, while teaching, we combine observational learning with student creativity, we will obtain more efficient learning. The lesson will be internalised and will last over time.

All of this leads us to highlight the important role that educators play in the classroom. The students observe all the actions carried out by their teacher. For this reason, we should look beyond traditional teaching (which is merely expository and static in nature) and carry out more activities that allow for observation skills to be developed.

Another aspect to highlight is the attitude that teachers have in the classroom. Mirror neurons allow us to understand the intentions and emotions being transmitted. Passionate teachers who teach their subjects with enthusiasm and joy achieve a greater level of concentration and observation from their students, capturing their attention for longer periods of time and infecting them with their emotion.

For all these reasons, there are different educational methodologies that allow us to combine this knowledge about mirror neurons with useful tools that fit into the classroom context. In any case, it is essential to incorporate new strategies to encourage motivation, as well as to use hands-on tasks (laboratory sessions, practical cases, etc.) that allow the contents being learned to be applied and internalised.

All the events that take place in the classroom, the dynamics of the classes and the emotional aspects that the teacher transmits to the students will condition the learning and experience that the students have in the classroom.

Israel protests: Netanyahu delays judicial reforms over fears of ’civil war’ – but deep fault-lines threaten future of democracy

Despite Benjamin Netanyahu’s decision to delay the planned judicial reforms that have so destabilised Israeli society, the country’s crisis of democracy is far from over.

Protests against Netanyahu’s plan, which have rocked Israel for weeks, redoubled in intensity last weekend after he sacked his Likud colleague and defence minister, Yoav Gallant, for calling on him to freeze the reform.

Within hours Netanyahu announced that the plan would be delayed until May. But he ignored advice from the US president, Joe Biden, to “walk away” from the judicial overhaul, insisting he doesn’t make decisions based on pressure from abroad.

The government’s plans to weaken the powers of Israel’s supreme court have been savaged by opponents as a major attack on the checks and balances within Israel’s unwritten constitutional system – an attack on democracy itself.

This forced pause is a significant gain for the mass protest movement, which has seen not merely public demonstrations but also refusals by reservists to participate in training exercises and threats not to turn up for service generally. In a country where army service is the norm, this has gone to the heart of Israeli identity.

Read more:
Israelis protest Netanyahu government’s brutality and plans to undermine rule of law

Netanyahu has blamed everyone but himself for delaying the judicial reforms. Listening to him announce the pause, it wasn’t the provocative nature of the changes that his far-right government wanted to steam-roller through the Knesset (parliament) that was to blame for the protests. Rather it was what he called “a minority of extremists that are willing to tear our country to shreds … escorting us to civil war and calling for refusal of army service, which is a terrible crime”.

The speech was part seduction, part threat. He would consult on the constitutional reform, he would protect human rights, but he insisted that the elected government had a right to implement its programme.

Power plays

The speech had been delayed from the morning as the prime minister first needed to secure his coalition’s support. The ultra-Orthodox parties had already fallen behind Netanyahu, but it was his far-right flank that proved more difficult.

In the end, Bezalel Smotrich, the finance minister and leader of the Religious Zionist party, caved in, realising that his resignation could end the coalition and with it the far right’s first taste of power. The national security minister, Itamar Ben-Gvir, leader of the Jewish Power party, held out for more: the creation of a national guard under his command, something many have described as a “private militia”.

Netanyahu will be hoping that freezing the judicial policy will demobilise the mass protests. His decision comes as Israel is about to enter a holiday period beginning with Passover (or Pesach) on April 5 and culminating in the 75th anniversary of Israel’s creation as a state on April 25.

The prime minister will hope this acts as a distraction. But it is unlikely that the groundswell of opposition will dissipate over the holidays.

Israel’s president, Isaac Herzog, has called a meeting of the government and opposition leaders to negotiate a way forward. Yair Lapid, the leader of Israel’s main opposition party Yesh Atid, and Benny Gantz, a former defence minister and leader of the National Unity Party, will also attend.

But it seems that the minister of justice, Yariv Levin – the architect of the constitutional changes – has not been invited. Nor has Netanyahu, who has been banned from intervening in judicial matters by the attorney-general, due to his ongoing criminal cases.

The strategic affairs minister Ron Dermer, a long-term Netanyahu confidant, will represent Likud. How talks will work with key government figures missing is a moot point. Nor is it clear what kind of compromise could be agreed.

The government also hopes to mobilise its own supporters in favour of its constitutional changes. On Monday night, tens of thousands of right-wing and settler demonstrators turned out in Jerusalem and were addressed by Smotrich and Ben-Gvir. Netanyahu wants to see more of these.

Power brokers: right-wing members of the Netanyahu government, Itamar Ben-Gvir and Bezalel Smotrich, sitting in the Knesset, January 2023.
Amir Cohen/UPI/Alamy Live News

On the surface the constitutional changes look very similar to action taken by populist governments in Hungary and Poland. But in Israel the conflict combines the populist politics that in many ways Netanyahu pioneered with a fundamental battle over two visions of Israel.

The broadly liberal democratic outlook of the protesters, the opposition parties – even some in Likud – clashes with those who want to see an Israel more in tune with their particular interpretations of Judaism.

Clash of cultures

The 1948 Israeli Declaration of Independence did not create a Jewish theocracy but a democratic civil state with rights for all irrespective of religion or ethnicity. The far right, and some in the religious Orthodox parties, are not comfortable with these values. So, the battle over the supreme court exposes deep fissures in Israeli society.

Netanyahu referred to a possible civil war in his speech on March 27, and many are concerned that changing the constitutional checks and balances disturbs the political and cultural consensus established since 1948.

Netanyahu shelves his plans for judicial reforms.

Israel was created by the left. And international support for its establishment came from the Soviet bloc as well as broadly social democratic politicians in the west. But Israel’s founders, particularly the first prime minister David Ben Gurion, were keen to balance their socialist outlook with respect for the religious community.

That balance meant not drafting a constitution. Instead, Israel created a piecemeal approach to constitutional issues and in the process forged a strong legal system and an internationally respected judiciary. This is now imperilled by the proposed reforms.

The tensions in Israeli society that the judicial reform policy has unleashed are unlikely to diminish over the next month. Netanyahu presides over a country which is not only violently confronting its Palestinian neighbour – but is increasingly at war with itself.

The astonishing life and music of Emahoy Tsegué-Maryam Guèbrou, the Ethiopian nun who’s died at 99

As the Ethiopian pianist Emahoy Tsegué-Maryam Guèbrou approached the age of 100, fans and music critics from the US to Ethiopia, Israel to Europe were eagerly rediscovering her work. The Addis Ababa-born pianist, who has died at the age of 99, had long been a source of fascination to connoisseurs, especially after a disc in the iconic Ethiopiques series (issued by Buda Musique) was dedicated to her work in 2006.

The classically trained musician was associated with the jazz genre and spent her life in a convent. She will remain in death, as much as in life, a figure of beautiful contradictions. Her legacy as a female instrumentalist, a religious composer of secular music and an ascetic who created work of astounding beauty requires fans to listen for nuance.

A monastery in Jerusalem

The Church of the Holy Sepulchre in the Christian quarter of Jerusalem is as good a place as any to begin to explore that legacy. Enthusiastic fans of Ethiopian music might turn up, having heard a rumour that there is an Ethiopian monastery on the roof. The church has been embroiled for decades in a quarrel over land rights. But in all seasons, Ethiopian monks sleep here in austere pods with close proximity to history and little protection from the elements.


Guèbrou lived this monastic life for decades, having moved to Jerusalem in the 1980s during a difficult period of military dictatorship under Mengistu Haile Mariam and never returning permanently to Ethiopia. She was known to continue to practise the piano in her room in the convent well into her 90s.

While musicians spotlighted in the Ethiopiques series tour Jerusalem periodically, she was the only one to move there. So, among her other legacies, her life story brings the Kebra Negast – the foundational epic literary work of Ethiopian civilisation centred on the transfer of the Ark of the Covenant from Jerusalem to Ethiopia – full circle into the complex geopolitics of the present day.

Who was Emahoy Tsegué-Maryam Guèbrou?

Guèbrou was photographed extensively in the bedroom of her Jerusalem convent, the scene of her remarkable fourth act. How she got there is as unbelievable a story as any of the 20th century.

Guèbrou’s life has been well catalogued in profiles and documentaries, and by a foundation created by her family.

She was born in 1923 to an upper-class family in Addis Ababa, and she was trained in western classical music as a young woman. She is known in particular for her virtuosity on the piano, but her training was wide-ranging, including expertise on the violin.

Guèbrou plays the piano in her room in Jerusalem.

As Ethiopia underwent a series of dramatic political changes across the 1900s, Guèbrou’s rights as an educated woman shifted, too. With the Italian occupation in the 1930s, she and her family spent time in a prison camp.

Under the rule of Emperor Haile Selassie, she spent time as a working musician alongside the Imperial Bodyguard. With the rise of a military junta that ruled Ethiopia from 1974 to 1991, she departed for Jerusalem. She remained there until the end of her life, a religious Christian adopted as a figure of reverence among the music fans of Israel’s 140,000 citizens of Ethiopian lineage.

The power of her music

Guèbrou’s work is usually described as highly syncretic – drawing from a wide variety of musical sources – mostly non-Ethiopian references from the western and jazz canons. Critics point to her long improvisations and her free adoption of the tonal ranges that the piano can accommodate. No doubt, her gender and instrument influence this description. Women in Ethiopia are most often singers and dancers, not instrumentalists. If they do play an instrument, it is the krar, a six-stringed lyre on which they accompany their own singing.

Then there is the sound of her piano playing. Western instruments have long been familiar in Ethiopia; the Arba Lijjotch, an orchestra of 40 Armenian orphans, made an impression on Selassie in 1924. The emperor took a liking to brass bands and was rarely received in public without one. The sound of the brass band merged in the 1960s and 1970s with the sound of the Azmari, a self-accompanied folk-poet, creating the musical style that is so well captured in the Ethiopiques albums.

Guèbrou attends a tribute concert in her honour.

Each volume is dedicated to a different musician, such as singers Mahmoud Ahmed and Asnaqetch Werqu, and volume 21 is a comprehensive study of Guèbrou’s repertoire. As music critics have noted, her work sounds markedly different from the heavy brass of the 1970s scene in Addis Ababa. But that is sometimes more a question of instrumental timbre than tonality. A careful listen helps to identify close engagement with a variety of Ethiopian musical traditions.

Guèbrou recorded over 100 tracks, but footage of public performances is exceedingly rare. Most fans will know the image of her sitting by her piano in religious garb as she speaks to the journalists who sought her out over the years. Religion, music and good deeds are indistinguishable from one another, as she explains the purpose of the charity founded in her name:

After I asked God for his will, I determined to publish and use the money to fund children and young people for their education.

The track Homesickness has no lyrics, so it can be mistaken for an original composition. But its first measures reveal that it is none other than Tezeta, arguably Ethiopia’s most famous and widely recorded song.

Guèbrou’s famous track, Homesickness.

Guèbrou starts it with the same ascending pentatonic motif (a musical scale with five notes per octave) as several of the artists who cover the song on Ethiopiques Volume 10: Ethiopian Blues and Ballads. While this particular version has no lyrics, the homesickness described in the song refers to a lost lover, a lost time and, for an emigrant, a lost homeland.

The amazing contradictions in Guèbrou’s life layer alternative readings on top of this loss. And they pose to the close listener the possibility that this iconic performer of Tezeta – a woman in exile without romantic entanglements – embodied for Ethiopians an elusive state of total artistic and spiritual unity.

Against baseball’s new pitch clock

Baseball moves very fast. That’s how it seems to me, anyway.

Just try coaching a Little League game; decisions pile up like branches on a tree, as tactical and strategic considerations multiply.

And as a player, when it’s time to act, you need to do so before you even get to the “t” in “think,” as a coach I know used to say.

That’s why it’s hard for me to shake the worry that the executives who restlessly tinker with the rules in an effort to speed up the game are doing so less as its reliable custodians and more as marketers.

Why else would they have adopted the new pitch clock rule?

Beginning this season in Major League Baseball, pitchers will have 15 seconds to throw when the bases are empty and 20 seconds when there’s a runner on base. Hitters need to be in the box, looking at the pitcher, with 8 seconds left on the clock. Violators will be punished by automatic balls or strikes. There are new time limits on managers’ deliberations on whether to challenge calls on the field, too.

But to me, the idea that you need to get things to move faster because it might seem to you – or to potential customers – as if nothing is going on is either a brazen sellout or a remarkable piece of ignorance.

During these purported empty spaces of inaction, the game’s drama is actually unfurling right there in front of you. As I explain in my book “Infinite Baseball,” you just have to know what to look for.

Seeing the game better

Every plate appearance – that willful and wily exchange between batter and pitcher – unfolds at the center of attention of every player and spectator.

Hitters develop ways of excelling – or, I should say, coping – and to some extent their strategy consists in scratching out seconds and milliseconds to collect their thoughts, to read the signals, to settle themselves in the box by breathing in, breathing out.

Pitchers, meanwhile, work to control the rhythm and keep the hitters off guard by concealing what’s coming next.

The scrutiny can be vicious. Twelve-year-old baseball players routinely burst into tears when they have struck out or grounded out yet again.

Former pitcher Felix Hernandez screams into his glove after losing his battle with a hitter and surrendering a home run during a game in 2014.
Stephen Brashear/Getty Images

Professional baseball players, no less than their younger counterparts, need tactical guidance and emotional support. The manager is cool in the dugout, surrounded by consiglieri, and in constant contact with coaches on the first and third baselines, who, for their part, are talking to the players.

It’s not rocket science, to be sure; but there is a lot to think about – whether to take a pitch, or fake bunt, or run on contact, or hold, or steal, or sacrifice, and on and on, with answers depending on the situation that itself varies pitch to pitch. Players need all the help they can get.

Clock time is not the only time. Pitches and plate appearances and outs and innings are another way to mark time, the way time in tennis is counted in service games, sets and match points.

In my view, baseball’s problem is not that it is too slow. It’s that it’s too fast. There’s a lot of action; it’s just that novice fans may not have the eyes to see it.

That’s what baseball should be helping viewers do: Slow the game down so they can see it better; or rather, teach them to see it better.

Baseball is an opportunity to learn to see, to notice the detail, to pay attention and uncover the decisions that inform everything that happens on the field. Fielders shift their positions, batters adjust their stances, catchers vary the target they provide, runners shorten or extend their leads.

It all carries information.

The game only shows up if you do

But baseball executives who sell the game, and are willing to sell it out, do so by making the game itself expendable. Your typical MLB game is drowned out in distracting bright lights, ear-splitting music, sideline games and giveaways. Roving cameras encourage fans to dance for the public or make out with the person next to them.

Fun is good, and I enjoy the carnival atmosphere, too. (Although if it’s a circus you want, you might prefer the Savannah Bananas, a wildly popular minor league team whose players wear kilts and who have adopted a rule calling a home run an out if a fan catches the ball.) And I don’t begrudge baseball’s entrepreneurs their payday. But no wonder the game seems boring beside all that. The game shows up only if you do.

The problem is not change. Imagine if baseball had never evolved from its past incarnations – the dead ball era when home runs were a rarity, segregated leagues and no free agency. And baseball responded to the remarkable 1968 season, known as “the year of the pitcher,” by actually lowering the pitcher’s mound to shift advantage back to batters.

Baseball, like the law – and like society itself – evolves.

Actually, there is another respect in which baseball is like the law. In baseball, the events on the field of play matter less than the assignments of responsibility and the judgments of praise – and blameworthiness.

Real baseball is in the scorebook, for it is there that hits are sorted from physically indiscernible patterns of action that count as fielder’s choices, or errors, or sacrifices. It is there that mere runs separate themselves from earned runs, and that stolen bases assert themselves as achievements that don’t come down to mere defensive indifference.

Each game tells a story.
Brace Hemmelgarn/Minnesota Twins via Getty Images

This is why keeping score in baseball is never just marking down what happens, like hatch marks on a prison wall marking the passage of time; it is always, rather, a thoughtful reflection on the meaning of events, and so is more like a daily journal.

And it is baseball’s problems – pertaining not only to the question of who’s winning, but rather who deserves credit or blame for this rapid-fire thing that just happened on the field – that define the game and preoccupy players, coaches and fans.

It is this space, one that is not limited to the physical field of play, that finally defines the national pastime and joins players and fans alike in its preservation and celebration.

I certainly appreciate that shorter games, like shorter books, have a certain attraction. They are less demanding and more user-friendly. And there is no doubt that games in MLB have gotten much longer than they used to be.

But baseball’s executives should avoid ruining the game in order to save it.

Poole oil spill expert Q&A: why is there an oil field in Dorset anyway?

A major incident has been declared after a leaky pipeline caused 200 barrels of oil-containing “reservoir fluid” to spill into Poole Harbour in Dorset, south-west England. The oil was being transported from nearby Wytch Farm, which hosts western Europe’s largest onshore oil field.

We asked oil industry expert Mark Ireland of Newcastle University to explain why there is an oil field in Dorset, what sort of oil it produces, and what happens next.

Why is there an oil field in Dorset anyway?

The oil is mostly found about 1,500 metres below the surface within Sherwood Sandstone, a common rock in Britain which was deposited in the Triassic period.

The oil field is often referred to as an “onshore” field but actually extends from just to the west of Wareham (onshore), to the east out beneath Poole Harbour and Bay.

The oil field (grey) spans on and offshore. The urban areas at the top of the page are the towns of Bournemouth and Poole. The inlet at the centre of the page is Poole Harbour.
Mark Ireland / Google Earth / NSTA, Author provided

Exploration for oil in the area can be traced back to 1935, with the nearby Kimmeridge Field discovered in 1959. The first oil was produced from Wytch Farm in 1979.

How is the oil extracted?

More than 200 production wells have been drilled at Wytch Farm over the years, most using a technique known as extended reach drilling. This makes use of directional drilling, which enables a well to be steered towards a target that is not immediately below the drilling pad.

In the case of Wytch Farm, this has enabled multiple wells to be drilled from single locations. It allowed the previous operators, and now the current operator Perenco, to reach the oil beneath Poole Harbour, which would otherwise have been inaccessible due to the environmentally sensitive nature of the harbour and surrounding wetlands.

Sea water has been injected into the rocks to maintain pressure and ensure more oil flows out, a common practice in many late life oil fields.

How much is there and when will it run out?

Wytch Farm has been estimated to have somewhere between 700 million and 900 million barrels originally in place. Of this, about half will be extracted.

At its peak in 1999, it produced almost 100,000 barrels of oil per day, but currently it produces around 14,000 barrels per day. Wytch Farm is a “late life” field, which means production is in decline and this is likely to continue.

This oil is extracted along with large volumes of water, and typically the oil to water ratio decreases during a field’s life. That’s why the 200 barrels of reservoir fluid in the latest leak are reportedly 15% oil and 85% water.

RSPB Arne is a nature reserve on the southern side of Poole Harbour.
Mark Christopher Cooper / shutterstock

The current operators proposed in 2016 to extend the production of the oilfield out to 2037. Wytch Farm currently provides about 1% of the UK’s crude oil demand, although it accounts for more than 75% of the production from “onshore” fields. When the field does reach its end of life, it could cost around £150 million to decommission.

Why haven’t I heard of this before?

Part of the answer is that it’s simply not very visible compared with many other large industrial developments. The onshore oil and gas industry has used Wytch Farm as an example of how its operations can have only a minor visual impact thanks to technology and engineering solutions: for example, both the well heads and the gathering station are nestled within forested areas.

Hidden in the trees: Wytch Farm.
Google Earth, CC BY-SA

Is this sort of oil field safe? How could we make sure there are no more spills like this?

As with all oil and gas operations, there are hazards and risks. Developments consist of not just the wells, but also surface infrastructure to transport the produced fluids, and processing facilities. In Poole Harbour, the leak reportedly occurred from a pipeline.

No further details have yet been provided, but it seems likely that this is related to one of several pipelines which connect the well heads, for example located on Furzey Island, to the main gathering station onshore.

It is unlikely that the oil and gas sector will ever be able to 100% rule out spills and leaks, particularly where there is ageing infrastructure. However specifically in the case of leaks from pipes, the industry has continued to look at ways to monitor pipelines and mitigate the likelihood of leaks. For example, companies are now using underwater remotely operated vehicles to inspect underwater pipelines.

Whose fault is this leak and whose responsibility is it to clean up?

Ultimately the company that operates the pipeline is responsible for ensuring that it is maintained. From the operator’s perspective, the UK government operates a dedicated incident reporting service that companies must use to report any release of oil or chemicals to the sea.

Perenco has said its “incident management team was activated immediately, the leak was stopped and booms deployed as an additional containment to protect Poole harbour. Perenco UK is working closely with the relevant authorities and a clean-up operation is under way.” The company has promised a full investigation to ascertain what happened.

In the case of Poole, the local Harbour Commissioners is the regulator responsible for coordinating the cleanup.

While initially the costs may be incurred by those coordinating the response, oil and gas producers who operate on the UK continental shelf are party to a voluntary agreement. This sees them take financial responsibility for any discharges of oil that occur as a result of exploration or production, and that any remedial measures are promptly reimbursed.

At this stage the focus is on the clean-up, but the UK has in place the necessary legal framework to ensure that financial responsibility is apportioned appropriately.

SVB’s newfangled failure fits a century-old pattern of bank runs, with a social media twist

The failure of Silicon Valley Bank on March 10, 2023, came as a shock to most Americans. Even people like myself, a scholar of the U.S. banking system who has worked at the Federal Reserve, didn’t expect SVB’s collapse.

Usually banks, like all companies, fail after a prolonged period of lackluster performance. But SVB, the nation’s 16th-largest bank, had been stable and highly profitable just a few months before, having earned about US$1.5 billion in profits in the last quarter of 2022.

However, financial history is filled with examples of seemingly stable and profitable banks that unexpectedly failed.

The demise of Lehman Brothers and Bear Stearns, two prominent investment banks, and Countrywide Financial Corp., a subprime mortgage lender, during the 2008-2009 financial crisis; the Savings and Loan banking crisis in the 1980s; and the complete collapse of the U.S. banking system during the Great Depression didn’t unfold in exactly the same way. But they had something in common: An unexpected change in economic conditions created an initial bank failure or two, followed by general panic and then large-scale economic distress.

The main difference this time, in my view, is that modern innovations may have hastened SVB’s demise.

Great Depression

The Great Depression, which lasted from 1929 to 1941, epitomized the public harm that bank runs and financial panic can cause.

Following the rapid expansion of the “Roaring Twenties,” the U.S. economy began to slow in early 1929. The stock market crashed on Oct. 24, 1929 – a date known as “Black Thursday.”

The massive losses investors suffered weakened the economy and led to distress at some banks. Fearing that they would lose all their money, customers began to withdraw their funds from the weaker banks. Those banks, in turn, began to rapidly sell their loans and other assets to pay their depositors. These rapid sales pushed prices down further.

As this financial crisis spread, depositors with accounts at nearby banks also began queuing up to withdraw all their money, in a quintessential bank run, culminating in the failure of thousands of banks by early 1933. Soon after President Franklin D. Roosevelt’s first inauguration, the federal government resorted to shutting all banks in the country for a whole week.

These failures meant that banks could no longer lend money, which led to more and more problems. The unemployment rate spiked to around 25%, and the economy shrank until the outbreak of World War II.

Determined to avoid a repeat of this debacle, the government tightened banking regulations with the Glass-Steagall Act of 1933. It prohibited commercial banks, which serve consumers and small and medium-size businesses, from engaging in investment banking and created the Federal Deposit Insurance Corporation, which insured deposits up to a certain threshold. That limit has risen sharply over the past 90 years, from $2,500 in 1933 to $250,000 in 2010 – the same limit in place today.

The government created the FDIC to protect depositors from bank failures.

S&L crisis

The nation’s new and improved banking regulations ushered in a period of relative stability in the banking system that lasted about 50 years.

But in the 1980s, hundreds of the small banks known as savings and loan associations failed. Savings and loans, also called “thrifts,” were generally small local banks that mainly made mortgage loans to households and collected deposits from their local communities.

Beginning in 1979, the Federal Reserve began to hike interest rates very aggressively to fight the high inflation rates that had become entrenched.

By the early 1980s, Congress began allowing banks to pay market interest rates on depositors’ accounts. As a result, the interest rate S&Ls had to pay their customers was much higher than the interest income they were earning on the loans they had made in prior years. That imbalance caused many of them to lose money.

Even though about 1 in 3 S&Ls failed from around 1986 through 1992 – somewhere around 750 banks – most depositors at small S&Ls were protected by the FDIC’s then-$100,000 insurance limit. Ultimately, resolving that crisis cost taxpayers the equivalent of about $250 billion in today’s dollars.

Because the savings and loans industry was not directly connected to the big banks of that era, their collapse did not cause runs at the bigger institutions. Nevertheless, the S&L collapse and the government’s regulatory response did reduce the supply of credit to the economy.

As a result, the U.S. economy underwent a mild recession in the latter half of 1990 and first quarter of 1991. But the banking system escaped further distress for nearly two decades.

High inflation spurred failures of many small savings-and-loan banks in the 1980s.
Bettmann via Getty Images

Great Recession

Against this backdrop of relative stability, Congress repealed most of Glass-Steagall in 1999 – eliminating Depression-era regulations that restricted the scope of businesses that banks could engage in.

Those changes contributed to what happened when, at the start of a recession that began in December 2007, the entire financial sector suffered a panic.

At that time, large banks, freed from the Depression-era restrictions on securities trading, as well as investment banks, hedge funds and other institutions outside the traditional banking system, had heavily invested in mortgage-backed securities, a kind of bond backed by pooled mortgage payments from lots of homeowners. These bonds were highly profitable amid the housing boom of that era, and they helped many financial institutions reap record profits.

But the Federal Reserve had been increasing interest rates since 2004 to slow the economy. By 2007, many households with adjustable-rate mortgages could no longer afford to make their larger-than-expected home loan payments. That led investors to fear a rash of mortgage defaults, and the values of securities backed by mortgages plunged.

It wasn’t possible to know which investment banks owned a lot of these vulnerable securities. Rather than wait to find out and risk not getting paid, most of their short-term creditors and clients rushed to get their money out by late 2007. This stampede led to cascading failures in 2008 and 2009, and the federal government responded with a series of big bailouts.

The government even bailed out General Motors and Chrysler, two of the country’s three largest automakers, in December 2008 to keep the industry from going bankrupt. That happened because the major car companies relied on the financial system to provide potential car buyers with credit to purchase or lease new cars. But when the financial system collapsed, buyers could no longer obtain credit to finance or lease new vehicles.

The Great Recession lasted until June 2009. Stock prices plummeted by more than 50%, and unemployment peaked at around 10% – the highest rate since the early 1980s.

As with the Great Depression, the government responded to this financial crisis with significant new regulations, including a new law known as the Dodd-Frank Act of 2010. It imposed stringent new requirements on banks with assets above $50 billion.

Traders in Chicago watch stock index futures plunge on March 17, 2008.
Scott Olson/Getty Image

Close-knit customers

Congress rolled back some of Dodd-Frank’s most significant changes only eight years after lawmakers approved the measure.

Notably, the most stringent requirements were now reserved for banks with more than $250 billion in assets, up from $50 billion. That change, which Congress passed in 2018, paved the way for regional banks like SVB to rapidly expand with much less regulatory oversight.

But still, how could SVB collapse so suddenly and without any warning?

Banks take deposits to make loans. But a loan is a long-term contract. Mortgages, for example, can last for 30 years. And deposits can be withdrawn at any time. To reduce their risks, banks can invest in bonds and other securities that they can quickly sell in case they need funds for their customers.

In the case of SVB, the bank invested heavily in U.S. Treasury bonds. Those bonds do not have any default risk, as they are debt issued by the federal government. But their value declines when interest rates rise, as newer bonds pay higher rates compared with the older bonds.

SVB had bought many of its Treasury bonds when interest rates were close to zero. But the Fed has been steadily raising interest rates since March 2022, and the yields available on new Treasurys increased sharply over the following 12 months. Some depositors became concerned that SVB might not be able to sell these bonds at a high enough price to repay all its customers.
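The interest-rate risk at the heart of this can be seen with simple bond arithmetic: a bond’s price is the present value of its fixed payments, discounted at the prevailing market yield. The sketch below uses hypothetical figures, not SVB’s actual holdings:

```python
# Illustrative bond pricing: the price is the present value of the fixed
# coupons and principal, discounted at the current market yield.
# All figures are hypothetical.

def bond_price(face, coupon_rate, years, market_yield):
    """Price of a bond paying annual coupons, discounted at market_yield."""
    coupons = sum(
        face * coupon_rate / (1 + market_yield) ** t
        for t in range(1, years + 1)
    )
    principal = face / (1 + market_yield) ** years
    return coupons + principal

# A 10-year bond bought when yields were near zero (1% coupon)...
at_purchase = bond_price(1000, 0.01, 10, 0.01)

# ...is worth far less once new Treasurys yield 4%.
after_hikes = bond_price(1000, 0.01, 10, 0.04)

print(f"{at_purchase:.2f} -> {after_hikes:.2f}")
```

Here a 10-year bond with a 1% coupon loses roughly a quarter of its value when market yields rise to 4%, the kind of unrealised loss that alarmed SVB’s depositors.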

Unfortunately for SVB, these depositors were very close-knit, with most in the tech sector or startups. They turned to social media, group text messages and other modern forms of rapid communication to share their fears – which quickly went viral.

Many large depositors all rushed at the same time to get their funds out. Unlike what happened nearly a century earlier during the Great Depression, they generally tried to withdraw their money online – without forming chaotic lines at bank branches.

Most of the SVB bank failure drama occurred online rather than in person.
John Brecher for The Washington Post via Getty Images

Will more shoes drop?

The government allowed SVB, which is being sold to First Citizens Bank, and Signature Bank, a smaller financial institution, to fail. But it agreed to repay all depositors – including those with deposits above the $250,000 limit.

While the authorities have not explicitly guaranteed all deposits in the banking system, I see the bailout of all SVB depositors as a clear signal that the government is prepared to take extraordinary steps to protect deposits in the banking system and prevent an overall panic.

I believe that it is too soon to say whether these measures will work, especially as the Fed is still fighting inflation and raising interest rates. But at this point, major U.S. banks appear safe, though there are growing risks among the smaller regional banks.

Brains also have supply chain issues – blood flows where it can, and neurons must make do with what they get

Neuroscientists have long assumed that neurons are greedy, hungry units that demand more energy when they become more active, and the circulatory system complies by providing as much blood as they require to fuel their activity. Indeed, as neuronal activity increases in response to a task, blood flow to that part of the brain increases even more than its rate of energy use, leading to a surplus. This increase is the basis of common functional imaging technology that generates colored maps of brain activity.

Scientists used to interpret this apparent mismatch in blood flow and energy demand as evidence that there is no shortage of blood supply to the brain. The idea of a nonlimited supply was based on the observation that only about 40% of the oxygen delivered to each part of the brain is used – and this percentage actually drops as parts of the brain become more active. It seemed to make evolutionary sense: The brain would have evolved this faster-than-needed increase in blood flow as a safety feature that guarantees sufficient oxygen delivery at all times.

Functional magnetic resonance imaging is one of several ways to measure the brain.

But does blood distribution in the brain actually support a demand-based system? As a neuroscientist myself, I had previously examined a number of other assumptions about the most basic facts about brains and found that they didn’t pan out. To name a few: Human brains don’t have 100 billion neurons, though they do have the most cortical neurons of any species; the degree of folding of the cerebral cortex does not indicate how many neurons are present; and it’s not larger animals that live longer, but those with more neurons in their cortex.

I believe that figuring out what determines blood supply to the brain is essential to understanding how brains work in health and disease. It’s like how cities need to figure out whether the current electrical grid will be enough to support a future population increase. Brains, like cities, only work if they have enough energy supplied.

Resources as highways or rivers

But how could I test whether blood flow to the brain is truly demand-based? My freezers were stocked with preserved, dead brains. How do you study energy use in a brain that is not using energy anymore?

Luckily, the brain leaves behind evidence of its energy use through the pattern of the vessels that distribute blood throughout it. I figured I could look at the density of capillaries – the thin, one-cell-wide vessels that transfer gases, glucose and metabolites between brain and blood. These capillary networks would be preserved in the brains in my freezers.

A demand-based brain should be comparable to a road system. If arteries and veins are the major highways that carry goods to each town – a specific part of the brain – then capillaries are akin to the neighborhood streets that actually deliver goods to their final users: individual neurons and the cells that work with them. Streets and highways are built on demand, and a road map shows what a demand-based system looks like: roads are often concentrated in parts of the country where there are more people – the energy-guzzling units of society.

In contrast, a supply-limited brain should look like the river beds of a country, which couldn’t care less about where people are located. Water will flow where it can, and cities just have to adjust and make do with what they can get. Chances are, cities will form in the vicinity of the main arteries – but absent major, purposeful remodeling, their growth and activities are limited by how much water is available.

This image shows astrocytes, a type of brain cell, contacting a ravinelike capillary.
Ed Reschke/Stone via Getty Images

Would I find that capillaries are concentrated in parts of the brain with more neurons and supposedly require more energy, like streets and highways built in a demand-based manner? Or would I find that they are more like creeks and streams that permeate the land where they can, oblivious to where the most people are, in a supply-driven manner?

What I found was clear evidence for the latter. For both mice and rats, capillaries make up a meager 2% to 4% of brain volume, regardless of how many neurons or synapses are present. Blood flows in the brain like water down rivers: where it can, not where it is needed.

If blood flows regardless of need, this implies that the brain actually uses blood as it is supplied. We found that the tiny variations in capillary density across different parts of dead rat brains matched perfectly with the rates of blood flow and energy use in the same parts of other living rat brains that researchers measured 15 years prior.

Resolving blood flow and energy demand

Could the specific density of capillaries in each part of the brain be so limiting that it dictates how much energy that part uses? And would that apply to the brain as a whole?

I partnered with my colleague Doug Rothman to answer these questions. Together, we discovered that not only do both human and rat brains do what they can with what blood they get and typically work at about 85% capacity, but overall brain activity is indeed dictated by capillary density, all else being equal.

Only about 40% of the oxygen supplied to the brain actually gets used because this is the maximum amount that can be exchanged as blood flows by – like workers trying to pick up items from an assembly line moving too fast. Local arteries can deliver more blood to neurons if they start using slightly more oxygen, but this comes at the cost of diverting blood away from other parts of the brain. Since gas exchange was already near full capacity to begin with, the fraction of oxygen extracted seems to even drop with a slight increase in delivery.
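This relationship can be made concrete with Fick’s principle, which relates oxygen consumption to blood flow and arterial oxygen content. The sketch below uses invented, order-of-magnitude numbers (not measured values) to show why the extraction fraction falls when flow rises faster than consumption:

```python
# Sketch of Fick's principle: the oxygen extraction fraction (OEF) is the
# ratio of oxygen consumption (CMRO2) to oxygen delivery (blood flow times
# arterial oxygen content). All numbers below are illustrative only.

def oxygen_extraction_fraction(cmro2, cbf, cao2):
    """Fraction of delivered oxygen that is actually consumed."""
    delivery = cbf * cao2
    return cmro2 / delivery

# Baseline: roughly 40% of the delivered oxygen is used.
baseline_oef = oxygen_extraction_fraction(cmro2=1.6, cbf=0.5, cao2=8.0)

# During activation, flow rises more than consumption (say +30% vs +10%),
# so the extraction fraction falls -- the imaging "surplus".
active_oef = oxygen_extraction_fraction(
    cmro2=1.6 * 1.10, cbf=0.5 * 1.30, cao2=8.0
)

print(f"baseline {baseline_oef:.2f}, active {active_oef:.2f}")
```

With flow up 30% but consumption up only 10%, extraction falls from 40% to about 34%, mirroring the drop in extraction fraction described above.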

From afar, energy use in the brain may look demand-based – but it really is supply-limited.

Blood supply influences brain activity

So why does any of this matter?

Our findings offer a possible explanation for why the brain can’t truly multitask – only quickly alternate between focuses. Because blood flow to the entire brain is tightly regulated and remains essentially constant throughout the day as you alternate between activities, our research suggests that any part of the brain that experiences an increase in activity – because you start doing math or playing a song, for example – can only get slightly more blood flow at the expense of diverting blood flow from other parts of the brain. Thus, the inability to do two things at the same time might have its origins in blood flow to the brain being supply-limited, not demand-based.

A better understanding of how the brain works could offer insights into human behavior and disease.
Peter Dazeley/The Image Bank via Getty Images

Our findings also offer insight into aging. If neurons must make do with what energy they can get from a mostly constant blood supply, then the parts of the brain with the highest densities of neurons will be the first to be affected when there is a shortage – just like the largest cities feel the pain of a drought before smaller ones.

In the cortex, the parts with the highest neuron densities are the hippocampus and entorhinal cortex. These areas are involved in short-term memory and the first to suffer in aging. More research is needed to test whether the parts of the brain most vulnerable to aging and disease are the ones with the greatest number of neurons packed together and competing for a limited blood supply.

If it’s true that capillaries, like neurons, last a lifetime in humans as they do in lab mice, then they may play a bigger role in brain health than expected. To make sure your brain neurons remain healthy in old age, taking care of the capillaries that keep them supplied with blood may be a good bet. The good news is that there are two proven ways to do this: a healthy diet and exercise, which are never too late to begin.

Candida auris: what you need to know about the deadly fungus spreading through US hospitals

A fungal superbug called Candida auris is spreading rapidly through hospitals and nursing homes in the US. The first case was identified in 2016. Since then, it has spread to half the country’s 50 states. And, according to a new report, infections tripled between 2019 and 2021. This is hugely concerning because Candida auris is resistant to many drugs, making this fungal infection one of the hardest to treat.

Candida auris is a yeast-type fungus that is the first to have multiple international health alerts associated with it. It has been found in over 30 countries, including the UK, since it was first identified in Japan in 2009.

It is related to other types of yeast that can cause infections, like Candida albicans, which causes thrush. However, Candida auris is very different from these other fungi and, in some ways, highly unusual.

First, it can grow, or “colonise”, human skin. Unlike many other Candida species that like to grow in our guts as part of the microbiome, Candida auris does not grow in this environment and seems to prefer the skin. This means that people who are colonised with Candida auris can shed lots of yeast from their skin, and this contaminates bed clothes and surfaces with the fungus. This can lead to outbreaks.

It is unusual for a fungal infection to spread from person to person, but that seems to be how Candida auris infections spread. Outbreaks can happen with this fungus, especially in intensive care units (ICU) and nursing homes where people are at a higher risk for getting fungal infections generally.

The fungus can live on surfaces for several weeks, and getting rid of it can be difficult. Enhanced cleaning and hand washing are needed to try to limit the spread of the fungus and exposure to patients who get ill from it.

Most people who are colonised with Candida auris will not get ill from it, or even know it is there. It causes infections when it gets into surgical wounds or into the blood from an intravenous line. Once inside the body, it can infect organs and the blood, causing a very serious and potentially fatal disease.

The mortality rate for people infected (as opposed to colonised) with the fungus is between 30 and 60%. But a precise mortality rate can be hard to pin down as people who are infected are often critically ill with other conditions.

Diagnosing an infection can be difficult as there can be a wide range of symptoms including fever, chills, headaches and nausea. It is for this reason that we need to keep a close eye on Candida auris as it can easily be confused with other conditions.

In the last few years, new tests to help identify this fungus accurately have been developed.

Candida auris can get into the body via an infected IV line.
Tyler Olson/Shutterstock

The first Candida auris infection was reported in the UK in 2013. However, there may have been other cases before this – there is evidence that some early cases were misidentified as unrelated yeasts.

The UK has so far managed to stop any major outbreaks, and most cases have been limited in their spread.

Most patients who have become ill from Candida auris in the UK had recently travelled to parts of the world where the fungus is more common or has been circulating for longer.

Spurred by COVID

Rising numbers of Candida auris infections are thought to be partially linked to the COVID pandemic. People who become very ill from COVID may need mechanical ventilation and long stays in the ICU, which are both risk factors for Candida auris colonisation and infection.

It will take some time to figure out exactly how the pandemic has affected rates and numbers of fungal infections around the world, but these are important questions to answer to help predict how Candida auris cases might fluctuate in the future.

As with most life-threatening fungal infections, treatment is difficult and limited. We have only a handful of antifungal drugs to fight these infections, so when a species is resistant to one or more of these drugs, the options for treatment are extremely limited. Some Candida auris infections are resistant to all three types of antifungal drug.

Healthcare professionals must remain vigilant to this drug-resistant fungus. Without close monitoring and enhanced awareness of this infection, we could see more outbreaks and serious disease associated with Candida auris in the future.

How UK broadcasting’s key principle of impartiality has been eroded over the years

A word that was bandied about freely in the wake of the Gary Lineker-BBC affair was “impartiality”. Apparently the gold standard of UK broadcasting, it was something that certain critics judged the BBC sports presenter to have breached in his personal social media posts.

Following Lineker’s suspension and subsequent reinstatement, a review of the BBC’s guidelines over its staff members’ use of social media is underway – not for the first time in the broadcaster’s recent history.

The UK has historically required broadcasters to abide by a set of “due impartiality” guidelines set out and policed by the UK’s broadcasting watchdog, Ofcom. These are designed to prevent the kind of partisanship that has long characterised American media.

Yet there is growing evidence UK broadcasters are effectively free to pursue a style of opinionated and partisan journalism familiar to viewers of US broadcast news and current affairs.

The public deserve more serious debate and scrutiny about the impartiality of broadcasters and how they are regulated.

The Foxification of news

In the US, between 1949 and 1987, broadcasters were required to adhere to the fairness doctrine. This helped to ensure reporting of politics and public affairs was broadly balanced.

As I explored in my book, Television Journalism, more opinionated formats in radio and then television news began to slowly emerge after the fairness doctrine was abolished. This was because broadcasters were no longer obliged to reflect different political perspectives.

In 1996, Fox News was launched by media tycoon Rupert Murdoch. It pursued a highly partisan brand of journalism that favoured conservative and Republican perspectives.

Over successive decades, this “Fox effect” paved the way for more partisanship in the US, with channels such as Newsmax and One American News adopting even more right-wing perspectives and sometimes even propagating conspiracy theories. For example, while Fox News initially questioned Donald Trump’s claims the 2020 presidential election had been rigged, the new hyper-partisan channels tended to legitimise his assertions of electoral fraud.

Fox News took a ratings hit, and chose to row back and endorse Trump’s conspiracy theories. It’s an editorial decision the channel is now defending in the courts after a US$1.6 billion (£1.3 billion) defamation lawsuit was launched against it by Dominion Voting Systems.

Since the fairness doctrine was rescinded, the more partisan US media environment has not led to enhanced audience satisfaction. Instead it has coincided with high levels of Americans mistrusting news.

Foxification of UK broadcast news?

At the turn of the century, concerns about a so-called “Foxification of news” spread across the Atlantic. But a systematic analysis of Sky News and BBC News between 2004 and 2007 showed broadcasters were broadly conforming to rules about “due impartiality”.

Just a decade later, however, new broadcasters such as GB News, UK News, LBC and Times Radio have pushed the boundaries of the UK’s rules on impartiality.

Read more:
News UK TV and GB News: new channels stoke fears of more partisan journalism

The new channels tend to deliver more opinionated and partisan journalism. Critics, for example, have highlighted GB News’s late-night opinion-based programming, and drawn attention to the channel’s dubious claims and conspiracy theories.

In March 2023, Ofcom found that the Mark Steyn programme on GB News was in breach of broadcasting rules. But of the 3,432 complaints Ofcom received about the channel up until that point, it concluded that the vast majority did not warrant further review.

Since GB News launched, Ofcom’s position has largely been to emphasise broadcasters’ freedom of expression. The regulator does not adopt a strict stopwatch approach to measuring journalistic balance.

Ofcom rules allow broadcasters to exercise a considerable degree of editorial discretion. For example, they can frame debates where both the presenter and any guests can voice strong views on topics such as immigration or Brexit.

This can lead to panel discussions with a highly partisan agenda and unbalanced mix of guests, such as when Conservative MPs Esther McVey and Phillip Davies interviewed the chancellor of the exchequer, fellow Conservative Jeremy Hunt.

Over recent months, there has been an increase in politicians presenting shows and interviewing guests from their own party. This trend has been evident in radio programmes for some time – including Labour MP David Lammy’s show on LBC – and on TV channels such as GB News and UK News it is senior Conservative members who now dominate the airwaves.

Enhancing public confidence

Ofcom recently issued a clarification that politicians can present in “non-news” programming outside of election periods. This was defined as programming with “extensive discussion, analysis or interviews with guests – often live – and long-form video reports”.

In the case of GB News, this represents a significant part of its routine output – meaning much of the channel’s airtime is free to adopt a partisan perspective.

Crucially, however, broadcasters are required to air “alternative viewpoints”, with presenters posing critical questions, challenging or rebutting perspectives. How rigorously this regulation is being applied is open to debate.

There have been few instances of broadcasters breaching Ofcom’s code over recent years. “Alternative perspectives”, after all, can be softly and fleetingly delivered during a long segment. In other words, dubious claims may pass by with limited journalistic challenge.

Social media clips and comments regularly draw attention to so-called breaches of Ofcom’s code, revealing differences in how the regulator and members of the public interpret impartiality. In fairness to Ofcom, it has publicly explained that programmes are examined in their entirety once a complaint is officially logged with them.

By contrast, edited clips on social media can be highly partisan because they are not subject to rules on impartiality.

If the public is to remain confident in broadcast journalism, it is essential Ofcom is transparent about how it applies editorial standards of impartiality. Public support for impartiality remains high, and research also shows people expect broadcasters to be fair and balanced rather than opinionated and partisan.

Ahead of the next general election, voters need to have confidence not just in the broadcasters that inform them, but in the regulator that polices them.

The ONS has published its final COVID infection survey – here’s why it’s been such a valuable resource

March 24 marked the publication of the final bulletin of the Office for National Statistics’ (ONS) Coronavirus Infection Survey after nearly three years of tracking COVID infections in the UK. The first bulletin was published on May 14 2020 and we’ve seen new releases almost every week since.

The survey was based primarily on data from many thousands of people in randomly selected households across the UK who agreed to take regular COVID tests. The ONS used the results to estimate how many people were infected with the virus in any given week.

In the survey’s first six months, we had results from 1.2 million samples taken from 280,000 people. Although the number of people participating each month declined over time, the survey has continued to be a highly valuable tool as we navigate the pandemic.

In particular, because the ONS bulletins were based on surveying a large, random sample of all UK residents, it offered the least biased surveillance system of COVID infections in the UK. We are not aware of any similar study anywhere else in the world. And, while estimating the prevalence of infections was the survey’s main output, it gave us a lot of other useful information about the virus too.
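The core statistical idea behind such a survey – estimating population prevalence, with uncertainty, from a random sample – can be sketched as follows. The numbers are hypothetical, and the ONS’s actual methodology involved more sophisticated modelling than this simple Wilson score interval:

```python
# Sketch of prevalence estimation from a random-sample survey, with a
# 95% Wilson score confidence interval. Counts are hypothetical; the ONS
# used more sophisticated statistical modelling than this.
import math

def wilson_interval(positives, n, z=1.96):
    """95% Wilson score interval for a sample proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. 950 positive tests among 100,000 randomly sampled people
positives, n = 950, 100_000
low, high = wilson_interval(positives, n)
print(f"estimated prevalence {positives/n:.2%} (95% CI {low:.2%} to {high:.2%})")
```

With 950 positives among 100,000 sampled, the estimated prevalence is about 0.95%, with a 95% interval of roughly 0.89% to 1.01%. Because the sample is random, this estimate is not skewed toward people who test for a reason, which is precisely the survey’s advantage over case counts.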

Unbiased surveillance

An important advantage of the ONS survey was its ability to detect COVID infections among many people who had no symptoms, or were not yet displaying symptoms.

Certainly other data sets existed (and some continue to exist) to give a sense of how many people were testing positive. For example, earlier in the pandemic, case numbers were reported at daily national press conferences. Figures continue to be published on the Department of Health and Social Care website.

But these totals have usually only encompassed people who tested because they had reason to suspect they may have been infected (for example because of symptoms or their work). We know many people had such minor symptoms that they had no reason to suspect they had COVID. Further, people who took a home test may or may not have reported the result.

Similarly, case counts from hospital admissions or emergency room attendances only captured a very small percentage of positive cases, even if many of these same people had severe healthcare needs.

Symptom-tracking applications such as the ZOE app or online surveys have been useful but tend to over-represent people who are most technologically competent, engaged and symptom-aware.

Testing wastewater samples to track COVID’s spread in a community has been tried, but it has proved difficult to link wastewater measurements reliably to infection numbers.

Read more:
The tide of the COVID pandemic is going out – but that doesn’t mean big waves still can’t catch us

What else the survey told us

Aside from swab samples to test for COVID infections, the ONS survey collected blood samples from some participants to measure antibodies. This was a very useful aspect of the infection survey, providing insights into immunity against the virus in the population and individuals.

Beginning in June 2021, the ONS survey also published reports on the “characteristics of people testing positive”. Arguably these analyses were even more valuable than the simple infection rate estimates.

For example, the ONS data gave practical insights into changing risk factors from November 21 2021 to May 7 2022. In November 2021, living in a house with someone under 16 was a risk factor for testing positive but by the end of that period it seemed to be protective. Travel abroad was not an important risk factor in December 2021 but by April 2022 it was a major risk. Wearing a mask in December 2021 was protective against testing positive but by April 2022 there was no significant association.

We shouldn’t find this changing picture of risk factors particularly surprising when concurrently we had different variants emerging (during that period most notably omicron) and evolving population resistance that came with vaccination programmes and waves of natural infection.

Also, in any pandemic the value of non-pharmaceutical interventions such as wearing masks and social distancing declines as the infection becomes endemic. At that point the infection rate is driven more by the rate at which immunity is lost.

The survey gave us insights into the protection offered by vaccines and non-pharmaceutical interventions.
Paul Maguire/Shutterstock

The ONS characteristics analyses also offered evidence about the protective effects of vaccination and prior infection. The bulletin from May 25 2022 showed that vaccination provided protection against infection but probably for not much more than 90 days, whereas a prior infection generally conferred protection for longer.

After May 2022, the focus shifted to reinfections. The analyses confirmed that vaccination protects against reinfection even in people who had already been infected, but again probably only for about 90 days.

It’s important to note the ONS survey only measured infections and not severe disease. We know from other work that vaccination is much better at protecting against severe disease and death than against infection.

Read more:
How will the COVID pandemic end?

A hugely valuable resource

The main shortcoming of the ONS survey was that its reports were always published one to three weeks later than other data sets due to the time needed to collect and test the samples and then model the results.

That said, the value of this infection survey has been enormous. The ONS survey improved understanding and management of the epidemic in the UK on multiple levels. But it’s probably appropriate now to bring it to an end in the fourth year of the pandemic, especially as participation rates have been falling over the past year.

Our one disappointment is that so few of the important findings from the ONS survey have been published in peer-reviewed literature, and so the survey has had less of an impact internationally than it deserves.

An introduction to the work of Kenzaburō Ōe in five books

Kenzaburō Ōe, the last of Japan’s great post-second-world-war writers, died in early March. When he was awarded the Nobel prize in 1994, he said that as a novelist he wished to “enable both those who express themselves with words and their readers to recover from their own sufferings and the sufferings of their time, and to cure their souls of the wounds”.

He wrote on themes that are taboo in Japan, such as disability, drawing on his life with his son Hikari, who was born with a herniated brain, autism and epilepsy. He wrote about the dangers of nuclear weapons and the aftermath of Hiroshima, and about the communities and folklore of his native Shikoku, a rural island.

He portrayed human nature in all its aspects, even the most cruel, with great inventiveness. In the words of his English translator John Nathan, his works feature a “language all his own, a language which can accommodate the virulence of his imagination”.

Here is a list of five books to help you navigate Ōe’s writings.

1. A Personal Matter (1964)

Grove Atlantic

Possibly the best known of Ōe’s novels, it follows the narrator “Bird” as he faces a personal crisis after his son is born with a brain herniation requiring immediate surgery. The novel explores, often with brutal sincerity, the conflict of a man unsure whether to let the child die or to coexist with it, thus giving up his dreams of an exotic life.

With this story (and in many others thereafter), Ōe breaks with the traditional Japanese form of the confessional, autobiographical I-novel. While inspired by his own son’s birth, Ōe distances himself from Bird and portrays the crisis of a man in connection with the universal theme of dealing with fatality and the inner demons it foregrounds.

2. Hiroshima Notes (1965)

In this essay collection, Ōe recounts his visits to Hiroshima beginning in the summer of 1963, when he was hired to write a report on a rally to abolish nuclear weapons. With his usual commitment to respecting human rights and suffering, the writer draws an often bleak portrait of how political factions appropriate victims’ traumas and subsume tragedy under political slogans.

Based on interviews with survivors, but also with the doctors and nurses who cared for them, Ōe’s accounts reveal the magnitude of the horrific bombing, which had long-lasting repercussions for decades after the events.

The question asked throughout is: “Did the Japanese really learn anything from the defeat of 1945?” Hiroshima Notes is a heartfelt cry to use the lessons of past mistakes to learn to respect human life, including the victims’ right to silence. Ōe’s opposition to nuclear weapons remained unwavering for all his life.

3. The Silent Cry (1967)

Ivy Books

In this novel, which Ōe considered his most successful, two brothers return to their native village in Shikoku to sell their family home. There, their lives change as a result of revelations about repressed feelings and histories of violence, including present-day riots that echo the local uprisings in which their ancestors were involved.

Through an elaborate structure, Ōe moves between three temporal planes: the turbulent years of peasant riots before the Meiji Restoration of 1868, the aftermath of Japan’s surrender in 1945, and the narrative present, reminiscent of the student protests of the 1960s. The rural setting uniting all three is a mythical site where reality and local legends coalesce, offering a powerful reflection on the relationship between history, community and memory.

4. The Changeling (2000)

This novel is the author’s attempt to come to terms with the death of his brother-in-law, the world-renowned film director Jūzō Itami, who allegedly died by suicide. In this semi-autobiographical narrative, Ōe’s fictional alter ego Kogito Chōkō enters into an asynchronous conversation with the tapes his brother-in-law recorded before his death. Their discussions on art, life and friendship make Kogito reflect on the possible causes of the suicide.

The Changeling offers important reflections on how death affects those who are left behind. It is a moving tale about processing grief and the possibilities for healing. It also includes possibly the most touching defence of education ever written.

5. Death by Water (2009)

Atlantic Books

Kogito Chōkō, now in his 70s, returns to his native Shikoku to finally write the novel about the truth of his father’s mysterious death at the end of the war. The fragmented recollections of the man, who had been involved with ultra-nationalistic reactionaries, motivate a critical reflection on the multiplicity of memory. Personal and local stories, intertwined with folklore, may appear to contradict one another, but they are nonetheless integral pieces of the complexity of human life.

Ōe’s novel, whose title references the English poet T.S. Eliot’s The Waste Land, examines how people must continue to live with traumatic events such as loss and rape, as well as the preoccupation of an elderly father (Kogito/Ōe) with leaving his disabled son alone after his death.

Among the books on this list, this is the one in which female characters play the most prominent role. Kogito’s wife, his sister and the young actress Unaiko all represent different generations and professions, eliciting considerations on the place of women in contemporary Japan.

Why economic growth alone will not make British society fairer or more equal

In presenting his spring budget, the British chancellor, Jeremy Hunt, claimed his plans are all about growth. For the leader of the opposition, Keir Starmer, growth is his key “mission”.

Higher growth – of the right kind – is a desirable goal. But Britain has a dismal record on this front. The rate of economic growth in the UK has slowed sharply since the millennium and is now lower than that of other rich nations.

However, this is only one of the multifaceted problems the country faces. Levels of poverty are double those of the 1970s. The income gap between rich and poor is wider than in nearly all other countries. And key public services have been starved of resources.

Neither the Labour party nor the Conservatives have much to say about these multifaceted crises, nor how to tackle them. The message seems to be that without faster growth, little can be done – a stance which recent history shows is far from the solution to either rising poverty or social frailty.

My research shows that the gains from economic activity in recent decades have been increasingly captured by a small, rich elite, while many post-war social gains have been reversed. This has been greatly exacerbated by rolling austerity measures since 2010.

Chancellor Jeremy Hunt delivering his budget on March 15 2023.

Personal enrichment

Britain’s pro-rich, anti-poor bias is central to its broken economy and fractured society. In recent decades, key determinants of national strength – rates of innovation, investment, labour force skills and the quality of social support – have lagged those of our competitors.

A primary, if not the only, reason for this failure has been the way business activity – too often aided by misplaced state policies – has been increasingly geared towards quick personal enrichment. This process of “corporate extraction” by a small elite has come at the expense of the long-term wealth creation that would boost economic resilience and serve the common good.

Neoliberal economists claim that weaker state regulation makes markets more competitive. However, more relaxed rules hand greater freedoms to boardrooms, enabling them to consolidate corporate power. Key markets, from banking and audit to pharmaceuticals and housebuilding, are now dominated by a few narrowly owned and controlled companies.

Many large corporations have been turned into cash cows for owners and executives. Boardrooms have adopted anti-competitive devices, from killing off rivals to price collusion. This is a return of what the American economist Thorstein Veblen termed “market sabotage” over a century ago.

Such practices crowd out the kind of innovation that offers greater social value. Since the Victorian era, they have been a central driver of Britain’s low wage, low productivity and high poverty economy. And their return in recent decades has become a key barrier to social and economic progress.

Instead of private investment and wages being boosted, the rising profits of recent times – which have continued to grow during the pandemic – have been siphoned off in disproportionate payments to shareholders and executives.

A 2019 report from the Trades Union Congress found that three-quarters of the profits of FTSE 100 companies were returned to shareholders in buy-backs and dividends in the four years from 2015.

With UK corporations increasingly owned by overseas institutional investors – notably US asset management firms – little of this flow has ended up in UK pension and insurance funds or been fed back into the domestic economy.

Leader of the opposition Keir Starmer delivers his growth speech at the office of UK Finance in central London on February 27 2023.
PA Images/Alamy

Wealth creation versus appropriation

In 1896, the influential Italian economist Vilfredo Pareto distinguished between “value-added activity” that brings gains across society and “extractive” or “appropriative” business practices that benefit a powerful minority.

Appropriation was commonplace in the 19th century. With the return of concentrated power, such practices have once again become dominant. These include the rigging of financial and product markets and the skimming of returns from financial transactions.

Consortiums of private-equity investors seeking fast and inflated returns have taken over many publicly listed companies (from motoring group the AA, to retailers Topshop, Debenhams and Morrisons, to name a few). In many cases, including Debenhams and Topshop owner Arcadia Group, this has weakened long-term viability.

Key public services are now undergoing similar treatment. Social care, once provided largely by public agencies, has become a key target for the private buy-out industry. As a result, significant proportions of public money are being effectively siphoned off by the new providers.

These trends have had a mostly damaging effect on the way society functions. An important effect of the process of wealth accumulation, for example, has been the diversion of resources from meeting the basic needs of all citizens to feed the lifestyles of the wealthiest.

As a result we are seeing the reappearance of what the American economist JK Galbraith once called “private affluence and public squalor”. Since 2010, at least 1,000 Sure Start childcare and family services centres have folded in England. Cuts in council spending have led to the loss of more than 4,500 youth worker jobs.

Compare Britain’s low level of social investment with the surging demand for private jets. Land that could be used to build social housing has been swallowed up by luxury developments.

Post-war social reforms

When Clement Attlee became UK prime minister in 1945, his Labour government inherited a society shattered by war. The public was hungry for change. Heeding economist and social reformer William Beveridge’s 1942 warning that Britain needed “more than patching”, he ignored the nation’s historic debt crisis (the result of paying for the war).

Instead, he launched an unprecedented programme of social spending that took priority over boosting private consumption. Ground-breaking and popular reforms included the National Health Service, a comprehensive, compulsory and universal system of national insurance, and family allowance benefits.

The strategies of both main political parties today contain a central contradiction. Higher growth alone, even if it can be delivered, will not bring a stronger, fairer and more equal society. That requires a transformative plan to tackle the way in which so much of modern business strategy drives inequality. As in 1945, this means more than “patching”.

Shedding pounds might benefit your heart even if some weight is regained – new study

Programmes to help people lose weight through changes to their diet, exercise or both, are mainstays of weight management. Despite their widespread use, many people worry that after the programmes end they will regain the weight they lost – or more – removing the health benefits.

To understand the effects of this, we brought together 249 studies including 60,000 adults who were overweight or living with obesity. We compared the half who joined programmes to lose weight through diet or exercise, or both, with the half who had no support (or less support than offered on these programmes). We assessed what happened to people’s weight after the programmes ended, and what this meant for their physical and mental health.

As expected, people lost weight during the programmes. There was a lot of variability, but, on average, people weighed 5kg less at the end of the programme than they did at the start. In most studies of weight loss programmes, people who don’t receive support (those in the “control group”) also lose weight – these are people who not only want to lose weight, but have volunteered to be in a study to help them lose weight, which means they are highly committed. For this reason, we use these “control groups” as a way to test the effect of the programme itself. On average, people assigned to a programme lost 2kg more than people in the control groups.

People who had been assigned to a diet and exercise programme gradually regained weight when the programme ended. Typically, it took at least five years to regain the weight that was lost during the weight loss programme – few studies followed people for more than five years. Some studies stopped at a point where people still hadn’t regained all the weight they had lost.

We showed that weight loss led to improvements in risk factors for heart disease, including high blood pressure, high cholesterol, blood glucose and type 2 diabetes. On average, these reductions were small, but translated across a population, would lead to significant reductions in disease.

Later, as weight was regained, these improvements ebbed away. But even five years later, some benefits were still apparent. There was some evidence that the chances of developing diabetes or having a heart problem were reduced, though most trials didn’t follow people for long enough to be certain. There is also some evidence that a short-term reduction in risk factors can lead to long-term reductions in disease incidence.

On average, people who went on diet and exercise programmes improved their quality of life, but around two years after the programme ended, their quality of life was similar to people who did not go on a programme.

Weight loss led to improvements in blood pressure.

Forty-seven studies also looked at the effect of these programmes on mental health. Overall, there was no evidence that these programmes made anxiety or depression worse, either during the programme or after it ended. There was some evidence that anxiety and depression might be improved, particularly in programmes that combined diet and exercise, rather than ones that focused on diet alone. We need more studies to be sure about these findings.

We also looked at whether anything made people more or less likely to regain weight. There was a lot of variation in how much weight was lost and how quickly weight was regained. On average, the more weight people lost through diet and exercise programmes, the quicker they regained it after the programme ended. However, this faster rate of weight regain did not wipe out the initial weight loss for at least five years.

Programmes that paid people to lose weight tended to lead to faster weight regain once the payments stopped, compared with programmes that offered no financial incentives. Programmes that continued to be available outside of the study were linked to less weight regain. This might be because people could keep attending the programme for as long as they wanted, or engage with it again when they started regaining weight. This included weight management programmes provided in the community.

In our review, we didn’t look at weight regain after using other weight loss methods, like medications. There is some evidence from individual studies that weight regain after weight-loss medication is stopped may be faster than after diet and exercise programmes.

Benefits still accrue, despite weight regain

Weight regain isn’t inevitable, but it is very common. Some people keep off most of the weight they lose in the long term, but it’s hard to predict who this will be. No one sets out on a weight loss attempt wanting to regain weight, but our genes and the environment we live in make it harder for some people to keep off weight than others.

Nonetheless, our findings provide some reassurance that taking part in a diet and exercise programme to lose weight can benefit people’s health – even if they put the weight back on. Increasingly, obesity is thought of as a chronic, relapsing condition – one that may need repeated periods of treatment to reduce health risks in the long term.

A train collision, a toxic derailment and thousands killed in an earthquake: How cost-cutting led to disaster in Turkey, Greece and Ohio

A train collision in Greece, a toxic derailment in Ohio, and thousands killed under collapsed buildings in Turkey – February 2023 was a month marked by tragedy across the world. As the dust settles, those left grieving the loss of family, friends and fellow citizens, and fearing what the future holds, are questioning who is to blame. Residents of Turkey and Syria watched as buildings toppled and crumbled around them, people in East Palestine, Ohio, were left fearing the air they breathe, and Greeks awoke to the news of a train collision that took the lives of dozens.

These disasters are varied and spread across Europe, Asia and North America, but they are linked by a common theme: they were – largely, if not entirely – avoidable. In each affected country, people have come out to protest what they see as government and regulatory failings, blaming a thirst for profit and the cost-cutting that goes along with it.

Neglected infrastructure and outdated equipment in Greece

Last month, in the Thessaly region of Greece, a passenger train and a freight train collided, killing at least 57 people – many of them young students. The tragedy provoked great anger among Greeks, particularly young people who had lost fellow students and felt solidarity with the victims of what they see as governmental failings.

Protests broke out as the news spread, soundtracked by cries of “murderers!” directed at officials and the centre-right government that many Greeks hold responsible. Metro and rail workers immediately organised strikes through their unions, angry at a problem laid at their door after years of apathy and neglect directed at their sector.

Those who wish to deflect from the government’s responsibility claim the tragedy was the result of human error – reflected in the arrest of the stationmaster responsible for the affected section of railroad on the night of the collision. But as the unions see it, the people involved were never given a fair chance of operating safely. According to his lawyer, the stationmaster, Vassilis Samaras, shares this view: he accepts partial responsibility, but he was working under difficult conditions – he was the only staff member responsible for the region, his colleagues having already gone home – and with a barely functioning signalling system.

Protesters highlight outdated rail infrastructure in the Mediterranean country (Photo: Nick Night / Unsplash)

Protesters and unions have called the government out on staff shortages, outdated equipment and underfunded infrastructure, with the overarching problem of cost-cutting at every opportunity. One such protester, Stelios Dormarazoglou, explained how he understood the disaster:

“Everyone knows that if the Greek state had wanted, this accident could have been prevented. My own son worked on upgrading the signalling system – nine years ago. Ever since it’s been stalled because companies are only ever interested in profits.”

The Greek president, Katerina Sakellaropoulou, has pledged to do all she can to modernise the Greek railway system and introduce automated safety systems, but for many Greeks this is too little, too late.

Overworked railroad workers in Ohio

On February 3, thirty-eight cars of a Norfolk Southern freight train passing through East Palestine – eleven of which were carrying hazardous materials – derailed and ignited into a 48-hour-long blaze. Toxic and carcinogenic materials were pumped into the air and seeped into the ground and waterways. While people within a one-mile radius were evacuated, many see this as a feeble response to an environmental disaster that should never have taken place.

The bulk of health concerns from residents of East Palestine and the surrounding area relate to the release of vinyl chloride into the environment. Upwards of 40,000 fish and animals are reported to have died as a result, including family pets as far as 10 miles away. While no one died directly as a result of the derailment, residents of the town report rashes, headaches and coughs, and live in a state of anxiety about the long-term health and environmental consequences of the pollution.

Echoing the accusations made in Greece, residents of the area and railroad workers argue that this disaster was avoidable: the result of underfunding, overworking and a lack of respect for safety regulations. Ron Kaminkow, general secretary of Railroad Workers United, made clear where he feels responsibility lies:

“Without a change in the working conditions, without better scheduling, without more time off, without a better work-life balance, the railroad is going to suffer… it’s just intrinsic, with short staffing. Corners get cut and safety is compromised.”

Between 2018 and 2020, railroad jobs were cut by 40,000, adding great strain to already overworked employees – not helped by the fact that they receive no paid sick leave, forcing workers to either work through illness or lose wages. They can also be disciplined, and eventually let go, simply for taking time off. This added stress is all the more galling given that the six main railroad companies in the United States reported $22 billion in profits over 2022.

The toxic cocktail of working through illness, punitive measures for time off, increased workloads due to staff cuts, and the resulting low morale means railroad workers are far from able to do their jobs to the required standard. This, evidently, can result in disaster when working with dangerous cargo.

Leo McCann, chair of the Rail Labor Division of the Transportation Trades Department, summed up the general feeling:

“The railroads are more interested in profitability and keeping their return on investment up and their numbers down so they can satisfy Wall Street, and they just live behind this shield hoping nothing will happen.”

Buildings crumble and collapse in Turkey

While nothing can be done to prevent an earthquake from occurring, the Turkish authorities were under no illusions about the inevitability of one. The nation, which straddles the European and Asian continents, sits at the meeting point of three tectonic plates: the African Plate, the Arabian Plate and the Anatolian Plate. This leaves the area highly vulnerable to severe earthquakes.

Some 85,000 buildings collapsed as a result of the magnitude 7.8 earthquake, which claimed almost 50,000 lives and injured an additional 115,000 people in Turkey. As the initial shock settled and rescue efforts began, people started to ask why some buildings collapsed while others stood and saved those within.

Many nations facing the same threat, such as Japan, take strict measures to minimise destruction and casualties, chiefly through building regulations requiring contractors to construct earthquake-proof buildings. This was also the case in Turkey until 2019, when the Erdogan government retroactively legalised thousands of buildings that did not meet earthquake construction standards. To avoid upgrading these substandard buildings, or to sidestep the regulations for new ones, owners and contractors had only to pay a fine to the Turkish government – putting money before the lives of thousands.

Around 75,000 buildings in the earthquake zone had been affected by this change in the law, and when disaster struck many crumbled under the stress, leaving those within or passing by trapped, injured or dead. Turkish engineers and architects had warned that this relaxation of the law was putting lives in danger, but they were ignored, their voices drowned out by those who saw only economic growth.

The lax regulations and the resulting tragedy are not just the product of profiteering; they also stem from a hunger for political power. A large part of Erdogan’s electoral success rests on his promise of more jobs and more homes for the Turkish people via a massive construction drive. But many of the country’s residents did not receive the homes they were promised – thanks to an obsession with profit and growth, and the resulting neglect of regulations and building standards, they received tombs.

’QBism’: quantum mechanics is not a description of objective reality – it reveals a world of genuine free will

What does quantum mechanics, the most successful theory ever proposed by physics, teach us about reality? The starting point for most philosophers of physics is that quantum mechanics must somehow provide a description of the world as it is independently of us, the users of the theory.

This has led to a large number of incompatible worldviews. Some believe the implication of quantum mechanics is that there are parallel worlds, as in the Marvel Comics universe; some believe it implies signals that travel faster than light, contradicting all that Einstein taught us. Some say it implies that the future affects the past.

According to QBism, an approach developed by Christopher Fuchs and me, the great lesson of quantum mechanics is that the usual starting point of the philosophers is simply wrong. Quantum mechanics does not describe reality as it is by itself. Instead, it is a tool that helps guide agents immersed in the world when they contemplate taking actions on parts of it external to themselves.

This article is accompanied by a podcast series called Great Mysteries of Physics, which uncovers the greatest mysteries facing physicists today – and discusses the radical proposals for solving them.

The use of the word “agent” rather than the familiar “observer” highlights that quantum mechanics is about actions that participate in creating reality, rather than observations of a reality that exists independently of the agent.

QBism and its homophone, the art movement Cubism, share the understanding that reality is more than what a single agent’s perspective can capture. However, unlike the art movement, QBism does not attempt to represent reality. It does not attempt to bring the different perspectives together in one “third-person” view. QBism is fundamentally anti-representational and first person.

Rescuing free will

This puts QBism in direct contradiction with the two pillars of the 19th-century conception of a mechanistic universe. One is that nature is governed by physical laws in the same way that a mechanical toy is governed by its mechanism. The other is that it is, in principle, possible to have an objective view of the universe from the outside – from a God’s eye or third-person standpoint.

This mechanistic vision is still dominant among 21st-century scientists. For instance, in their 2010 book The Grand Design, Stephen Hawking and Leonard Mlodinow write: “It is hard to imagine how free will can operate if our behaviour is determined by physical law, so it seems that we are no more than biological machines and that free will is just an illusion.”

Instead, the QBist vision is that of an unfinished universe, of a world that allows for genuine freedom, a world in which agents matter and participate in the making of reality.

A key aspect of quantum mechanics is randomness. Rather than making firm predictions, quantum mechanics is concerned with the probabilities for potential measurement outcomes. The physicist Ed Jaynes famously expressed that to understand quantum mechanics, one has to understand probability first.

Frank Ramsey, one of the originators of the personalist Bayesian approach.
Wikipedia, CC BY-SA

In this spirit, QBism’s starting point is the personalist Bayesian approach to probability (originally a method of statistical inference and now a fully fledged theory of decision making under uncertainty). In this approach, probabilities are an agent’s personal degrees of belief.

So rather than describing the statistics of some experiment, probabilities provide guidance to agents on how they should act. In other words, probabilities are not descriptive but “normative” – analogous to an instruction manual. It turns out that the standard probability rules can be derived from the (normative) principle that one’s probabilities should fit together in a way that guards against a sure loss when used for making decisions.
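The coherence principle mentioned above can be made concrete with a small sketch. The function and numbers below are illustrative assumptions, not from the article: they show the "Dutch book" idea that an agent whose probabilities for mutually exclusive, exhaustive outcomes do not sum to 1 can be booked into a guaranteed loss at their own betting prices.

```python
def sure_loss(probs, stake=1.0):
    """Agent's guaranteed net result when a bookie trades unit bets at the
    agent's own prices.

    The agent prices a bet paying `stake` on outcome i at probs[i] * stake.
    If the probabilities sum to s > 1, the bookie sells the agent one bet on
    every outcome; if s < 1, the bookie buys them all. Exactly one outcome
    occurs, so the agent's net result is the same whatever happens.
    """
    s = sum(probs)
    if s > 1:
        return 1.0 - s  # agent paid s, receives 1: a sure loss
    if s < 1:
        return s - 1.0  # agent received s, pays out 1: a sure loss
    return 0.0          # coherent probabilities admit no sure loss

incoherent = [0.6, 0.6]  # two exclusive outcomes, prices sum to 1.2
coherent = [0.5, 0.5]    # prices sum to 1.0

print(round(sure_loss(incoherent), 2))  # -0.2: agent loses whatever happens
print(sure_loss(coherent))              # 0.0: no sure loss can be built
```

The standard probability rules (non-negativity, normalisation, additivity) are exactly the conditions that make this kind of guaranteed loss impossible.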

QBism’s great insight was that the probabilities that appear in quantum mechanics are no different. They are not, as in the standard view, fixed by physical law, but express an agent’s personal degrees of belief about the consequences of measurement actions the agent is contemplating.

In QBism, the role of the quantum laws is to provide extra normative principles about how an agent’s probabilities should fit together. Rather than providing a description of the world, the rules of quantum mechanics are an addition to the standard probability rules – that is, to classical (non-quantum) decision theory. They assist physicists in decisions such as how to design a quantum computer in order to minimise the probability of error, or what atoms to use in an atomic clock in order to increase the precision of time measurements.

Measurements are actions

Just like “observer”, the term “measurement” can be misleading because it suggests a pre-existing property that is revealed by the measurement. Instead, a measurement should be thought of as an action an agent takes to elicit a response from the world. A measurement is an act of creation that brings something entirely new into the world, an outcome that is shared between the agent and the agent’s external world.

Quantum mechanics is often depicted as “weird” and hard, or indeed impossible, to understand. As a matter of fact, the weirdness of quantum mechanics is an artefact of looking at it the wrong way. Once the two main QBist insights – that the quantum rules are guides to action and that measurements do not reveal pre-existing properties – are taken on board, all quantum paradoxes disappear.

Take Schrödinger’s cat, for example. In the usual formulation, the unfortunate animal is described by a “quantum state” taken to be a part of reality and implying that the cat is neither dead nor alive.

The QBist, by contrast, does not regard the quantum state as a part of reality. The quantum state a QBist agent might assign has no bearing on whether the cat is alive or dead. All it expresses is the agent’s expectations concerning the consequences of possible actions they might take on the cat. Unlike most interpretations of quantum mechanics, QBism respects the fundamental autonomy of the cat.

Or take quantum teleportation. According to a common way of presenting this operation, a particle’s quantum state, again regarded as a part of reality, disappears at one place (A) and mysteriously reappears at another (B) – quite literally as in a transporter in the Star Trek science fiction series.

For a QBist, however, nothing real is transported from A to B. All that happens in quantum teleportation is that an agent’s belief about the particle at A becomes, after the operation, the same agent’s belief about a particle at B. The quantum state that expresses the agent’s belief about the particle at A initially is mathematically identical to the quantum state that expresses that same agent’s belief about the particle at B after the operation. Quantum teleportation is a powerful tool used in applications such as quantum computing, but in QBism there is nothing counter-intuitive or weird about it.
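The claim that the state assigned at A is mathematically identical to the state assigned at B can be checked with a toy simulation of the standard textbook teleportation protocol. This is a sketch under the usual conventions (qubit 0 holds the state to teleport, qubits 1 and 2 form a shared Bell pair); none of these implementation details come from the article itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random normalised single-qubit state |psi> at location A.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)  # shared Bell pair |Phi+>
state = np.kron(psi, phi_plus)                  # full 3-qubit state

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell_basis = {                                  # Alice's measurement basis
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}
corrections = {"Phi+": np.eye(2), "Phi-": Z, "Psi+": X, "Psi-": X @ Z}

M = state.reshape(4, 2)       # rows: Alice's two qubits, columns: Bob's qubit
for outcome, B in bell_basis.items():
    bob = B.conj() @ M        # Bob's (unnormalised) state given the outcome
    bob /= np.linalg.norm(bob)
    fixed = corrections[outcome] @ bob
    # After the correction, Bob's state matches |psi> up to a global phase:
    print(outcome, round(abs(np.vdot(psi, fixed)), 6))  # 1.0 in every branch
```

Whichever of the four outcomes Alice obtains, Bob's corrected state is the same state that was originally assigned at A – nothing physical need be pictured as travelling from A to B.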

QBism is an ongoing project. It spells out clearly the meaning of all mathematical objects in the theory and is thus a fully developed interpretation of quantum mechanics. Yet, QBism is also a programme for developing new physics and has already yielded deep insights even if it is still a work in progress.

QBism has also led to a fruitful dialogue with the kindred philosophical schools of thought of pragmatism and phenomenology. Its vision of the world is one in which agents possess genuine freedom and respect each other’s autonomy. I like to think that this is what quantum mechanics has been trying to tell us about reality all along.

Great Mysteries of Physics 4: does objective reality exist?

It is hard to shake the intuition that there’s a real and objective physical world out there. If I see an umbrella on top of a shelf, I assume you do too. And if I don’t look at the umbrella, I expect it to remain there as long as nobody steals it. But the theory of quantum mechanics, which governs the micro-world of atoms and particles, threatens this commonsense view.

The fourth episode of our podcast Great Mysteries of Physics – hosted by me, Miriam Frankel, science editor at The Conversation, and supported by FQxI, the Foundational Questions Institute – is all about the strange world of quantum mechanics.

According to quantum theory, each system, such as a particle, can be described by a wave function, which evolves over time. The wave function allows particles to hold multiple contradictory features, such as being in several different places at once – this is called a superposition. But oddly, this is only the case when nobody’s looking.

Although each potential location in a superposition has a certain probability of appearing, the second you observe it, the particle randomly picks one – breaking the superposition. Physicists often refer to this as the wave function collapsing. But why should nature behave differently depending on whether we are looking or not? And why should it be random?
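The statistics described above can be illustrated with a toy simulation (the amplitudes and function below are illustrative assumptions, not from the episode): each "measurement" randomly picks one location, with probability given by the squared amplitude, and over many repetitions the frequencies converge on the predicted values.

```python
import random

def measure(amplitudes, rng=random):
    """Sample one outcome with probability |amplitude|^2 (the Born rule)."""
    probs = [abs(a) ** 2 for a in amplitudes]
    return rng.choices(range(len(amplitudes)), weights=probs, k=1)[0]

# A superposition over two locations with amplitudes 0.6 and 0.8
# (0.36 + 0.64 = 1, so the state is normalised).
random.seed(0)
counts = [0, 0]
for _ in range(100_000):
    counts[measure([0.6, 0.8])] += 1

print(counts[0] / 100_000)  # ≈ 0.36
print(counts[1] / 100_000)  # ≈ 0.64
```

Any single run gives one definite outcome; only the long-run frequencies reveal the underlying probabilities – which is exactly what makes the randomness question so puzzling.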

Not everyone is worried. “If you want to explain everything we can observe in our experiments without randomness, you have to go through some really weird and long-winded explanations that I am much more uncomfortable with,” argues Marcus Huber, a professor of quantum information at the Technical University of Vienna. And indeed, you can get rid of randomness if you accept that the future can influence the past, that there’s more than one outcome to every measurement or that everything in the universe is predetermined since the dawn of time.

Another problem is that quantum mechanics seems to give rise to contradictory facts. Imagine a scientist, Lisa, inside a lab measuring the location of a particle. Until her colleague, Nikhil, knocks on the lab door and asks what outcome she saw, he would describe Lisa as being in a superposition of both branches – one where she sees the particle here and one where she sees the particle there. But at the same time, Lisa herself may be convinced that she has a definite answer as to where the particle is.

That means that these two people will say that the state of reality is different – they’d have different facts about where the particle is.

There are many other oddities about quantum mechanics, too. Particles can be entangled in a way that enables them to somehow share information instantaneously even if they’re light years apart, for example. This challenges another common intuition: that objects need a physical mediator to interact.

Physicists have therefore long debated how to interpret quantum mechanics. Is it a true and objective description of reality? If so, what happens to all the possible outcomes that we don’t measure? The many worlds interpretation argues they do happen – but in parallel universes.

Another set of interpretations, collectively known as the Copenhagen interpretation, suggests quantum mechanics is to some extent a user’s manual rather than a perfect description of reality. “The Copenhagen interpretations – what they share is at least a partial step back from the full-blown descriptive aim of physics,” explains Chris Timpson, a philosopher of physics at the University of Oxford. “So the quantum state, this thing which describes these lovely superpositions, that’s just a tool for making predictions about the behaviour of macroscopic measurement scenarios.”

But why don’t we see quantum effects on the scale of humans? Chiara Marletto, a quantum physicist at the University of Oxford, has developed a meta-theory called constructor theory which aims to encompass all of physics based solely on simple principles about which physical transformations in the universe are ultimately possible, which are impossible, and why.

She hopes it can help us understand why we don’t see quantum effects on the macroscopic scale of humans. “There’s nothing [in the laws of physics] that says it’s impossible to have quantum effects at the scale of a human being,” she says. “So either we discover a new principle that says that they really are impossible – which would be interesting – or in the absence of that, it is more a question of trying harder to create conditions in the laboratory to bring these effects about.”

Another problem with quantum mechanics is that it isn’t compatible with general relativity, which describes nature on the largest of scales. Marletto is using constructor theory to try to find ways to combine the two. She has also come up with some experiments which could test such models – and rule out certain interpretations of quantum mechanics.

You can listen to Great Mysteries of Physics via any of the apps listed above, our RSS feed, or find out how else to listen here. You can also read a transcript of the episode here.

Mystic Meg: fortunetellers have always been popular, despite a long history of efforts to silence them

Since her death on March 9, celebrities and clients have been paying tribute to Margaret Ann Lake, better known by her stage name “Mystic Meg”. In a career spanning five decades, Mystic Meg went from writing horoscopes to predicting winners on the live National Lottery broadcast from 1994 to 2000.

From Joseph interpreting Pharaoh’s dreams, to Elizabeth I’s astrologer John Dee (1527-1608), predicting the future has long been a path to fame and fortune. But unlike the many fortunetellers who came before her, Meg was able to practice her art without fear of persecution.

In the biblical Judaic culture of Joseph, magical practices were tolerated, but considered suspect and dangerous. And John Dee may have earned the protection of the queen, but he needed it. Throughout his long career as an astrologer, he was accused of witchcraft several times.

These accusations of harmful magic were often combined with the suspicion that fortunetellers were frauds taking advantage of popular credulity. In the 17th and 18th centuries, many European countries abandoned attempts to prosecute witches.

New legislation, such as the UK’s 1735 Witchcraft Act, focused on fraudulence alone. The act was used against spiritualists, psychics and astrologers up until the second world war.

The Magic Circle by John William Waterhouse, (1886).
Tate Britain

Across the Channel, from the 18th to the 20th centuries, the French authorities waged a long and unsuccessful war on magicians of all kinds. Although many of the men and women who ended up on trial were rural “wise women”, “wizards”, or “cunning folk”, others were not that different from Mystic Meg and the astrology hotlines of the 1990s.

The “Red Witch”, Jean-Jacques-Maurice Talazac, preferred telling fortunes by post in an age when telephones were still a luxury. But unlike Meg, Talazac’s trade was illegal. He was prosecuted in 1908 and again in 1916 and sentenced to several months in prison, as well as a fine and costs.

So why is it that whenever the authorities have tried to repress fortunetellers for good, they have failed? Perhaps a fellow magician’s Twitter tribute to Meg offers a clue: “She defied the dreary sceptic,” wrote Uri Geller, “as did her fans.”

Fortunetellers and their fans

Critics of astrology, tarot and other popular magical practices tend to have a black and white view of what draws people to supernatural pursuits and how audiences treat prophecies and divination.

European thinkers in the 19th century saw attitudes to magic in racial terms, arguing that where “civilised” Europeans knew the difference between entertainment and reality, non-western cultures were too primitive to see magic as deception.

Witches apprehended, a British pamphlet from 1613.
Wellcome Collection, CC BY

More recent work by anthropologists, sociologists and historians has not only questioned these racist assumptions about primitive credulity, but also increasingly shown that attitudes to magic in modern Europe remain flexible and uncertain.

Mystic Meg’s many fans could enjoy her predictions on the National Lottery Live or read her horoscopes in the paper without coming to any final decision about the reality or impossibility of the powers she professed.

In desperate times, even the most rational among us find it hard to dismiss bad omens. Why is it so hard to throw darts at a picture of someone you love, if you do not believe this symbolic attack can cause real physical harm? As musician Regina Spektor sings: “no one laughs at God in a hospital.”

Gypsy Fortune-Teller by Taras Shevchenko (1841).
National Museum Тaras Shevchenko

Critics of superstition have often painted openness to magical interpretations as weakness or moral failing. From 18th century crusaders, such as Voltaire, to more recent psychologists, many have pointed out the real social costs of erroneous beliefs. But historians have discovered that where magic led, science often followed.

When the Nobel prize-winning scientists Frederick Soddy and Ernest Rutherford proved in 1901 that atoms could be broken, Soddy’s first thought was that this was “transmutation” – like the famed transformation of lead into gold sought in Renaissance alchemy.

Rutherford retorted: “For Mike’s sake, Soddy, don’t call it transmutation. They’ll have our heads off as alchemists.”

Mystic Meg’s claims were largely limited to the star signs of likely lottery winners, or romantic predictions for the week ahead. But perhaps her own good fortune was to have risen to fame in a culture where the most dangerous associations of magic had mostly disappeared.

The world is hooked on junk food: how big companies pull it off

It is almost impossible nowadays to listen to the radio, watch TV or scroll through social media without being exposed to an advertisement telling us that all we need for a little happiness and love is a sugary drink or a fast-food snack. There’s nothing that a tasty, affordable, ready-made meal cannot fix, we are asked to believe.

Over many decades our food environments have been relentlessly encouraging us to make choices that are harmful to our health, through pricing, marketing and availability. Such marketing has contributed to a growing global obesity crisis, as well as nutrition deficiencies, as more and more people opt to eat unhealthy food.

We each have the right to buy whatever we can afford. But commercial forces limit our freedom of choice more than we think. New evidence published in The Lancet shows that key causes of ill health – such as obesity and related noncommunicable diseases – are linked to commercial entities with deep pockets and the power to shape the choices people make. They do this by influencing the political and economic system, and its underlying regulatory approaches and policies.

Industry tactics

The ways that commercial entities shape our food environments to maximise their profits are known as the “commercial determinants of health”. They create an environment that drives us towards unhealthy choices.

There are three main ways they do this:

We are socialised to believe that, as adults, our food choices are a direct result of free will, and of freedom of choice. Yet for people with a limited amount of money, that “freedom” is exercised in a context largely shaped – and limited – by what food and drink manufacturers and retailers choose to produce, market and sell.
Marketing creates demand. Supermarkets are filled with ultra-processed foods with lots of added sugars, unhealthy fats and harmful additives. These food products are designed to activate your taste “bliss point” and make you crave more. Food and beverage manufacturers use unethical tactics to market them. They target children with manipulative imagery and stressed-out parents with “easy” solutions for feeding and satisfying their family.
Food and beverage companies’ profits strengthen their political influence. This is especially true in under-regulated markets in low- and middle-income countries. They use their economic power (employment, tax revenues) to support corporate lobbying that weakens government policy.

What can be done

The Lancet series maps out four ways through which governments, businesses and citizens can reduce the harms caused by big corporations and curb the power of commercial entities.

1. Rethink the political and economic systems.

Developing countries, including Bhutan, Ecuador and Brazil, as well as developed countries such as New Zealand and Norway, are beginning to pave the way for new frameworks that put people’s well-being first. In the UK, Scotland and Wales have also taken significant steps.

These frameworks measure commercial effects on health and the environment, and encourage commercial practices that promote health. Ways to do this include enforcing policies – such as the tax on sugar-sweetened beverages – that ensure commercial entities pay their fair share of taxes, and are obliged to account for the full costs of the health, social and environmental harms caused by the production, consumption and disposal of their products.

2. Develop an “international convention” on commercial determinants of health.

In practice, this would mean replicating and expanding global regulatory frameworks that work. The World Health Organization’s (WHO) Framework Convention on Tobacco Control has shown that public health policies can be protected from commercial interests. Since its adoption in 2003, the convention has had significant impact on public policy changes related to tobacco control around the world. It’s provided a framework for countries to develop and implement evidence-based measures to reduce tobacco use and the harms associated with it. Some examples include smoke-free laws; graphic health warnings on tobacco products; prohibition of tobacco advertising, promotion and sponsorship; and tobacco tax increases.

The Lancet suggests that, with support from the WHO and its member states, an “international convention” on commercial determinants of health should be developed. It is proposed that public health policy leaders and politicians replicate the tobacco control convention by making it legally binding for countries to comply with a set of principles or rules. The framework would have to be broad enough to cover the full range of commercial influences on health. These include mining, fossil fuels, gambling, automobile industries, pharmaceuticals, technology and social media (beyond the better-known alcohol and food industries).

3. Comprehensive food-environment policies.

One type of government policy proven to help protect and improve health is public procurement – how governments purchase goods and services. Governments can use their purchasing power to influence the food industry by encouraging the production and distribution of healthy food and limiting the availability of unhealthy food products.

In 2008, the mayor of New York City ordered city agencies to meet public food procurement standards for over 260 million annual meals and snacks. The standards apply to food from over 3,000 programmes at 12 agencies, including schools, hospitals and shelters. Nutritional requirements cover dairy, cereals, meat, fruit and vegetables, and set meal nutrient thresholds.

The Brazilian School Food Programme is another example of a national public-procurement policy with direct health benefits. The programme provides healthy meals to millions of students in public schools across Brazil.

The programme is required to purchase 30% of its supply from family farmers. It has improved the health and well-being of students, and promoted sustainable and ethical food production practices. It has also successfully regulated the sale and marketing of food within and outside school premises.

Countries across the globe could benefit from adopting this model, including South Africa, where despite industry pledges not to sell to schools, unhealthy foods and beverages remain easily accessible and available in schools.

Read more:
South Africa must ban sugary drinks sales in schools. Self regulation is failing

4. Social mobilisation.

Citizens, civil society groups, activists, public health practitioners and academics can demand their right to health by calling for government action on commercial determinants of health. This can be done using a variety of strategies. They can raise their collective voice in support of evidence-based health measures; expose and oppose the harmful effects of commercial determinants on health and equity; and insist that commercial actors and governments are held accountable.

This article is part of a media partnership between The Conversation Africa and PRICELESS SA, a research-to-policy unit based in the School of Public Health at the University of the Witwatersrand. Researchers from the SAMRC/Wits Centre for Health Policy and Decision Science also contributed to the Lancet Series on the commercial determinants of health.

Industries can harm health in many ways: here are 3 that aren’t so obvious

A recent ground-breaking series of reports in the science journal The Lancet unpacks what commercial determinants of health are, and how they affect public health. It uses a new, broader definition of the determinants:

the systems, practices and pathways through which commercial actors drive health and equity.

Some commercial entities contribute positively to health and society. However, research shows that some commercial products and practices are directly linked to avoidable ill health, planetary damage, and social and health inequity. Large transnational corporations are especially to blame.

Read more:
Profit versus health: 4 ways big global industries make people sick

The Lancet series examines not just directly damaging products (such as alcohol or ultra-processed foods) but the commercial practices that influence human health, inequities in health and planetary health. The series highlights the need to better understand the diversity within the commercial world, and the variety of ways its normal operations harm humanity and the planet.

3 ‘hidden’ industries that can harm your health

Some seemingly benign – or even beneficial – industries actually have major and avoidable impacts on health. They contribute negatively to health in subtle or indirect ways.

The pharmaceutical industry is one. Its abuse of intellectual property to increase prices and limit access to essential drugs is a common trend. The pre-selling of COVID-19 vaccines to wealthy countries is a recent, massive-scale example. The industry’s longstanding resistance to lowering the price of antiretroviral drugs for HIV meant that untold thousands, mostly in developing countries, died because they lacked access to treatment.

Social media is another industry of particular concern especially given the increase in its consumption in recent years. A plethora of research confirms the adverse effects of social media on mental health, especially an increase in cases of depression and anxiety.

On top of this, other industries often use social media to promote harmful products and for “social washing”, a strategy employed by companies to promote themselves as more socially responsible than they actually are, purely for brand promotion. We have also seen an increase in “surveillance capitalism” whereby private information is gathered through social media use. The information is then used by, for example, junk food companies through platforms such as Facebook for the targeted marketing of unhealthy commodities.

Extractive companies have also been linked to various health and planetary harms. Air and water pollution, environmental degradation, fatalities, silicosis, and noise-induced hearing loss are just a few examples of these harms. A report by the South African Human Rights Commission has severely criticised the mining industry and held that this sector “is riddled with challenges related to land, housing, water and the environment”.

In the South African context, the harms created by the mining industry are particularly concerning given the knock-on, damaging socio-economic effects – for example as a result of the loss of breadwinners – on families and, often, vulnerable communities.

Harmful business practices

Not only can harm to global health come from a range of industries, it can also come indirectly from business practices. Three harmful practices are:

Next steps

Commercial determinants of health are clearly influenced by a much wider range of actors and practices than the more obvious product-related harms of the “big four” (tobacco, alcohol, ultra-processed foods and fossil fuels). No business entity is purely “good” or “bad”, but we have seen an increasing trend where companies use “beneficial” practices, such as sponsorship, donations and pledges to environmental causes, to mask harmful practices and influence politicians.

Without a common understanding that these industries are harming our health, no action can be taken against them. Holding industry accountable and stricter government regulation are the minimum actions needed.

The Lancet series authors are calling for a global move towards health-promoting models of commerce. This is a move away from emphasising profits and economic growth, and instead focusing on societal and planetary health and well-being.

This article is part of a media partnership between The Conversation Africa and PRICELESS SA, a research-to-policy unit based in the School of Public Health at the University of the Witwatersrand. Researchers from the SAMRC/Wits Centre for Health Policy and Decision Science also contributed to the Lancet series on the commercial determinants of health.

What are auroras, and why do they come in different shapes and colours? Two experts explain

Over millennia, humans have observed and been inspired by beautiful displays of light bands dancing across dark night skies. Today, we call these lights the aurora: the aurora borealis in the northern hemisphere, and the aurora australis in the south.

Nowadays, we understand auroras are caused by charged particles from Earth’s magnetosphere and the solar wind colliding with other particles in Earth’s upper atmosphere. Those collisions excite the atmospheric particles, which then release light as they “relax” back to their unexcited state.

The colour of the light corresponds to the release of discrete chunks of energy by the atmospheric particles, and is also an indicator of how much energy was absorbed in the initial collision.

The frequency and intensity of auroral displays are related to activity on the Sun, which follows an 11-year cycle. Currently, we are approaching the next maximum, which is expected in 2025.

Fox Fires, a short film inspired by the Finnish folk tale of the aurora borealis.

Connections to the Sun

Such displays have long been documented by peoples throughout North America, Europe, Asia and Australia.

In the 17th century, scientific explanations for what caused the aurora began to surface. Possible explanations included air from Earth’s atmosphere rising out of Earth’s shadow to become sunlit (Galileo in 1619) and light reflections from high-altitude ice crystals (René Descartes and others).

Read more:
Do the northern lights make sounds that you can hear?

In 1716, English astronomer Edmond Halley was the first to suggest a possible connection with Earth’s magnetic field. In 1731, the French natural philosopher Jean-Jacques d’Ortous de Mairan noted a coincidence between the number of sunspots and auroras. He proposed that the aurora was connected with the Sun’s atmosphere.

It was here that activity on the Sun was first linked with auroras on Earth, giving rise to the areas of science now called “heliophysics” and “space weather”.

Earth’s magnetic field as a particle trap

The most common source of aurora is particles travelling within Earth’s magnetosphere, the region of space occupied by Earth’s natural magnetic field.

Images of Earth’s magnetosphere typically show how the magnetic field “bubble” protects Earth from space radiation and repels most disturbances in the solar wind. However, what is not normally highlighted is the fact that Earth’s magnetic field contains its own population of electrically charged particles (or “plasma”).

Model representation of Earth’s magnetic field interacting with the solar wind.

The magnetosphere is composed of charged particles that have escaped from Earth’s upper atmosphere and charged particles that have entered from the solar wind. Both types of particles are trapped in Earth’s magnetic field.

The motions of electrically charged particles are controlled by electric and magnetic fields. Charged particles gyrate around magnetic field lines, so when viewed at large scales magnetic field lines act as “pipelines” for charged particles in a plasma.

The Earth’s magnetic field is similar to a standard “dipole” magnetic field, with field lines bunching together near the poles. This bunching up of field lines actually alters the particle trajectories, effectively turning them around to go back the way they came, in a process called “magnetic mirroring”.

‘Magnetic mirroring’ makes charged particles bounce back and forth between the poles.

Earth’s magnetosphere in a turbulent solar wind

During quiet and stable conditions, most particles in the magnetosphere stay trapped, happily bouncing between the south and north magnetic poles out in space. However, if a disturbance in the solar wind (such as a coronal mass ejection) gives the magnetosphere a “whack”, it becomes disturbed.

The trapped particles are accelerated and the magnetic field “pipelines” suddenly change. Particles that were happily bouncing between north and south now have their bouncing location moved to lower altitudes, where Earth’s atmosphere becomes more dense.

As a result, the charged particles are now likely to collide with atmospheric particles as they reach the polar regions. This is called “particle precipitation”. Then, when each collision occurs, energy is transferred to the atmospheric particles, exciting them. Once they relax, they emit the light that forms the beautiful aurora we see.

Curtains, colours and cameras

The amazing displays of aurora dancing across the sky are the result of the complex interactions between the solar wind and the magnetosphere.

Aurora appearing, disappearing, brightening and forming structures like curtains, swirls, picket fences and travelling waves are all visual representations of the invisible, ever-changing dynamics in Earth’s magnetosphere as it interacts with the solar wind.

As these videos show, aurora comes in all sorts of colours.

The most common are the greens and reds, which are both emitted by oxygen in the upper atmosphere. Green auroras correspond to altitudes close to 100 km, whereas the red auroras are higher up, above 200 km.

Blue colours are emitted by nitrogen – which can also emit some reds. A range of pinks, purples and even white light are also possible due to a mixture of these emissions.
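The link between colour and energy can be made concrete with a little photon arithmetic. The sketch below treats each emission as a single photon of energy E = hc/λ, using standard spectroscopic wavelengths – 557.7 nm for the green oxygen line, 630.0 nm for the red oxygen line and 427.8 nm for the blue nitrogen line (values from atomic physics, not from this article):

```python
# Photon energy E = h*c/lambda for the main auroral emission lines.
# Wavelengths are standard atomic-physics values: 557.7 nm (green,
# atomic oxygen), 630.0 nm (red, atomic oxygen), 427.8 nm (blue,
# molecular nitrogen ion).
PLANCK_H = 6.62607015e-34   # Planck constant, J·s
LIGHT_C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19        # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of one photon of the given wavelength, in electronvolts."""
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9) / EV

for name, nm in [("green O", 557.7), ("red O", 630.0), ("blue N2+", 427.8)]:
    print(f"{name}: {photon_energy_ev(nm):.2f} eV")
```

Running it gives roughly 2.9 eV for blue, 2.2 eV for green and 2.0 eV for red – consistent with the idea that the colour of the light indicates how much energy the excited particle releases.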

The aurora is more brilliant in photographs because camera sensors are more sensitive than the human eye. Specifically, our eyes are less sensitive to colour at night. However, if the aurora is bright enough it can be quite a sight for the naked eye.

Where and when?

Catching aurora in the southern hemisphere.

Even under quiet space weather conditions, aurora can be very prominent at high latitudes, such as in Alaska, Canada, Scandinavia and Antarctica. When a space weather disturbance takes place, auroras can migrate to much lower latitudes to become visible across the continental United States, central Europe and even southern and mainland Australia.

The severity of the space weather event typically controls the range of locations where the aurora is visible. The strongest events are the most rare.

So, if you’re interested in hunting auroras, keep an eye on your local space weather forecasts (US, Australia, UK, South Africa and Europe). There are also numerous space weather experts on social media and even aurora-hunting citizen science projects (such as Aurorasaurus) that you can contribute towards!

A rare sighting of the aurora australis from central Australia, with Uluru in the foreground.

Get outside and witness one of nature’s true beauties – the aurora, Earth’s gateway to the heavens.

Read more:
Fire in the sky: The southern lights in Indigenous oral traditions

Fears AUKUS will undermine Australia’s defence sovereignty are misplaced

The AUKUS submarine announcement earlier this month reignited a long-running debate about how to best preserve Australia’s sovereignty.

The announcement addressed some key concerns. For example, the United States will sell (rather than lease) Australia its Virginia class submarines so Australia can keep these boats. The submarine commanders and crew will be Australian. The rotational deployments of US and UK submarines through Perth won’t become a foreign base. And Australia will ultimately build its own AUKUS class nuclear-powered submarines, likely in Adelaide.

Even so, the AUKUS announcement was met with sharp criticism. For some commentators, AUKUS is the last nail in the coffin of Australian independence from the US.

There are concerns about the reliance on others for technology and skills, especially regarding the nuclear reactors. Also, the massive investment allocated to the submarines may undermine a more balanced defence force needed for defending the continent.

What’s more, some analysts have questioned whether Australia can maintain independent military decision-making in future conflicts. For example, would Australia’s submarines be used to support the US in a war with China?

These concerns deserve serious consideration.

But many Australian strategists reject them. For them, AUKUS is less revolution than evolution, merely the logical extension of Australia’s robust defence cooperation with the US over many decades.

The AUKUS submarine plan represents a new shade of the dependency that Australia has always had on the US for advanced capabilities, and with which Australia has always been comfortable. So long as Australia is able to use these tools as it sees fit, the argument goes, then sovereignty is ensured. This is the way of the alliance.

We haven’t yet sacrificed our defence sovereignty or sovereign industrial capabilities on the altar of AUKUS.

Here’s why.

Read more:
AUKUS submarine plan will be the biggest defence scheme in Australian history. So how will it work?

What are Australia’s defence sovereignty objectives?

Many critiques of AUKUS go far beyond the specific issue of whether the proposed submarine pathway compromises Australia’s defence sovereignty. Instead, they touch on deeper questions of Australia’s strategic alignment with the US and the UK, and our national decision-making writ large.

For example, the most headline-grabbing critique is that AUKUS has deprived Australia of its freedom to choose what to do in a possible military contingency over Taiwan. But this hinges on a hypothetical future scenario, the answer to which cannot be known today.

We simply don’t know if Australia is now more locked into a potential US-China conflict than was already the case before September 2021, when AUKUS was first announced.

Answering that question involves far more than an assessment of just our submarine industrial capability.

We should instead judge the submarine announcement, and whether it undermines Australia’s sovereignty, against the actual procurement objectives that lay behind the need to replace the retiring Collins class submarines.

Will the submarine plan help Australia enhance its “defence sovereignty”? And will it help Australia build a “sovereign industrial capability” that gives future governments credible military options at a time of their choosing?

The 2018 Defence Industrial Capability Plan defined “defence sovereignty” as

the ability to independently employ Defence capability or force when and where required to produce the desired military effect.

If the Virginia class or AUKUS class submarines couldn’t be independently operated and needed US commanders or nuclear technicians, this would undermine our defence sovereignty. But this isn’t the case.

Similarly, if a future Australian prime minister wished to send a submarine on a mission and could only do so with US and UK approval and technical support, that would also suggest the government didn’t have full defence sovereignty. But this isn’t the case either.

A century of partnerships with others

The same 2018 capability plan defined a “sovereign industrial capability” as

when Australia assesses it is strategically critical and must therefore have access to, or control over, the essential skills, technology, intellectual property, financial resources and infrastructure as and when required.

In fact, Australia’s domestic defence industrial base has focused on controlling key elements of a capability, rather than manufacturing everything onshore. For the submarines, those key elements will be the components needed to operate and sustain the boats from Australian shipyards.

Australia has, therefore, chosen to largely equip its defence force with the most advanced capabilities available from abroad – it’s the world’s fourth largest arms importer for a reason.

It’s worth remembering Australia has never had a truly sovereign submarine industrial capability. The cancelled program with France was but the latest in a century of partnership with others.

This has included:

jointly crewed boats with the UK before the first world war
dependence on the US submarine fleet operating from our ports during the second world war
British-built Oberon submarines in the Cold War
and Swedish-designed Collins class submarines in the 1990s, incorporating a US combat system and French sensors and radars.

In this sense, AUKUS isn’t a “Brave New World”. It’s more “Back to the Future” for Australia’s shipbuilding aspirations.

AUKUS is a sovereign choice

The dream of an entirely self-sufficient defence industry is inherently appealing. There’s something unsettling about relying on others for capabilities to defend oneself.

But Australia’s entry into AUKUS doesn’t only entail sovereign risks for Canberra. The US is also making a big bet by putting its most closely guarded nuclear reactor technology and boats in Australian hands at a time when it needs them most.

So what does the US get out of this deal? In 2021, US officials were at pains to reassure us there was no quid pro quo in the deal. But even if there were, there’s nothing about AUKUS that locks future Australian governments into actions they cannot withdraw from.

The UK received nuclear propulsion technology from the US in 1958 but stayed out of the Vietnam War.

Canada was also offered nuclear-powered submarines in 1988, but chose not to pursue the offer due to budget constraints and public opposition. That backtrack didn’t doom US-Canada relations.

Every morning for the next half century, Australia’s leaders will wake up free to make a choice about the future of the AUKUS partnership.

So, too, will the Australian people, who at each election will be able to vote for political parties who might offer different visions for the future of AUKUS. That is what it means to be sovereign.

Australia’s cultural institutions are especially vulnerable to efficiency dividends: looking back at 35 years of cuts

In January the Albanese government launched a new arts policy, Revive. Among its measures was a commitment to exempt Australia’s seven national performing arts training organisations from the efficiency dividend.

The directors of Australia’s national cultural organisations in the galleries, libraries, archives and museums (GLAM) sector might well have looked on in envy, but also in hope. Revive did not deal with their problems, but Arts Minister Tony Burke does recognise they are in deep trouble.

Staff at the National Gallery of Australia, for example, are working in mouldy rooms and using towels and buckets to mitigate a “national disgrace”. This week, Burke gave assurances the cultural institutions will receive increased funding in the May budget, but it is not yet clear how much, or for how long.

And for many of the sector’s ills, the efficiency dividend is to blame.

Read more:
’Arts are meant to be at the heart of our life’: what the new national cultural policy could mean for Australia – if it all comes together

Making cultural institutions ‘efficient’

The Hawke government introduced the efficiency dividend – an annual decrease in government organisations’ funding – in 1987, levied at 1.25% a year.
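A seemingly small levy bites hard once it compounds. The sketch below assumes a flat 1.25% cut every year – a simplification, since the rate has varied and governments have imposed one-off increases – and shows that over 35 years more than a third of an agency’s real base funding disappears:

```python
# Compound effect of an annual "efficiency dividend" on an agency's
# base funding. The 1.25% rate is from the article; applying it flat
# every year is a back-of-envelope simplification, not an official
# Treasury calculation.
def remaining_funding(rate: float, years: int) -> float:
    """Fraction of original real funding left after `years` of cuts."""
    return (1 - rate) ** years

print(f"After 35 years at 1.25%: {remaining_funding(0.0125, 35):.1%}")
```

About 64% of the original base remains after 35 years – a cumulative cut of roughly 36%, which is why small agencies with fixed statutory duties feel it so acutely.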

While there was much window-dressing about greater efficiency and value for taxpayers, the overriding aim was budget savings. State governments have also levied efficiency dividends for the same reason.

The efficiency dividend has undermined the cultural institutions ever since. Senior public servants considered that if big government departments were taking a hit, the GLAM sector should not be treated differently.

But these institutions are not like other government agencies.

While small and specialised – and therefore poorly placed to absorb continuing cuts – these institutions are required by law to “develop their collections”, and so are legally mandated to grow. Yet they can barely afford to preserve their existing materials.

The only place where economies could reasonably be made was in employment. As staff numbers and organisational capacity declined, successive governments told the agencies to find new funding sources, such as philanthropy or user charges.

Entry charges were previously levied at the National Gallery, and even briefly at the Australian War Memorial.

Both generated animosity among visitors, who rightly felt that, as taxpayers, they should not have to pay to see the collections maintained on their behalf.

Not neglecting, strangling

In the end, institutions were in the invidious position of maintaining some core functions while neglecting or abandoning others.

When the efficiency dividend took effect in the late 1980s, the newly established National Film and Sound Archive was forced to suspend acquisition to save deteriorating records.

By 2008 similar effects were evident across the board. Required to produce efficiencies each year, the Australian National Maritime Museum found itself cancelling some exhibitions while deferring or scaling back others.

The Australian Institute of Aboriginal and Torres Strait Islander Studies told a parliamentary inquiry staff were “racing against time” to preserve materials that would be “lost forever” in the face of staffing cuts.

The institute even reported the likelihood of having to “compromise” its repatriation program to adhere to the efficiency dividend in 2008, the year of the Apology. The hypocrisies involved here were boundless.

The agencies have often been told to do additional work, even as funding disappeared.

The Rudd government reduced the closed period of most Commonwealth records from 30 years to 20 in 2010. The National Archives would have to release two years of cabinet records annually for ten years. Meanwhile, the archives was failing to meet basic statutory obligations for ensuring timely public access to open period records.

In a 2020 review, David Tune reported the timeframe for examining and clearing records was “unachievable because of resource constraints”.

Governments have nonetheless continued to cut funding to these institutions. The Rudd government increased the efficiency dividend by two percentage points to a total of 3.25% for one year. In December 2015 the Turnbull government imposed another 3% hike with a view to saving A$36.8 million.

Emergency funding was soon required to keep Trove, the National Library’s popular database, operational. That was a more sensitive issue for nervous politicians: there are Trove users in every electorate around the country and they love it passionately. But a leaky roof in the building that houses Trove, the National Library, is harder to see – even from Capital Hill.

Read more:
Trove’s funding runs out in July 2023 – and the National Library is threatening to pull the plug. It’s time for a radical overhaul

Where to?

In 2018 the Coalition government, supported by Labor, was able to find $500 million for massive renovations at the Australian War Memorial. But it took concerted national action by 150 writers, an intense media campaign and the treasurer’s personal intervention to secure $67 million in 2021 to save vital records at the National Archives from disintegrating before they could be digitised.

If the Albanese government really cares about the future of Australia’s national cultural institutions, the government will exempt them from the efficiency dividend. Revive sets a precedent in relation to performing arts institutions. The National Cultural Policy Advisory Group Burke established has advised dropping the efficiency dividend for cultural institutions.

The unpalatable alternative is continuing the cycle of fiscal suffocation and emergency funding we have seen for decades. A government that creates emergencies for itself to solve can never be called efficient. And for citizens, there is no dividend.

Read more:
Getting more bang for public bucks: is the ’efficiency dividend’ efficient?

Teaching the ‘basics’ is critical – but what teachers really want are clear guidelines and expectations

Anyone watching the debate over the National Party’s recent curriculum policy announcement could be forgiven for thinking there is a deep divide in education philosophy and best practice in New Zealand. The truth isn’t quite that simple.

In fact, most (if not all) interested parties would agree that teaching and learning the basics of literacy and numeracy are vital. As one expert observer noted, the policies of the major political parties actually have much in common.

The National Party policy promises a curriculum focused on “teaching the basics brilliantly”. The government says much of this work is already under way with its current curriculum “refresh”. So where exactly is the issue?

The idea of mandated testing checkpoints clearly has some worried that the National Party’s policy is a return to a “back to basics” mentality that ignores or minimises other vital areas of teaching. As one headline had it, “KPIs are for businesses and boardrooms, not children and schools”.

While the basics are important, the argument goes, there are other things schools should focus on. That may be true, but it need not be so binary. Basic early literacy and numeracy skills are the foundation on which much other success is built.

Perhaps a better way to frame the discussion might be: a wider view of learning is important – and the basics are necessary.

Learning to read and write is hard

Foundations take time to put in place, however. With reading and writing, for example, it’s common for capable adults to assume that many of the foundational skills are easily achieved.

In fact, neuroscience shows literacy learning is a remarkably complex process. Learning to identify letters and the sounds associated with them, and learning to read and retain words, involves a kind of repurposing of the brain’s architecture.

Learning to correctly spell words is even more complex than reading them. Successful teaching of spelling requires clear and systematic guidelines. Mastery cannot be left to chance or done through rote learning lists of words.

Read more:
Has a gap in old-school handwriting and spelling tuition contributed to NZ’s declining literacy scores?

Another often undervalued basic skill is handwriting. It can be seen as purely a presentation technique and simply about neatness. But research shows handwriting skill contributes directly to writing achievement and is the best predictor of writing success in younger students.

Reading and writing also rely on a foundation of oral language skill, including understanding sentence structure and having a strong vocabulary. Being proficient with sentences is the building block for paragraph formation, essential to more advanced writing tasks. Vocabulary knowledge is a strong predictor of academic achievement, connected to both reading and writing success.

What teachers want

None of these skills develop by chance. So the question becomes, how can a curriculum best support teachers to teach literacy from its foundations upwards, with as many students as possible succeeding?

In my work as a literacy facilitator, I find teachers want specifics. They want to know what to teach at each stage. They want to know what the children in their classes should be able to do within that year. They want to know what denotes progress, and when they should be concerned.

Read more:
Teachers need a lot of things right now, but another curriculum ’rewrite’ isn’t one of them

But the curriculum as a whole is necessarily broad and all-encompassing, to reflect the complex needs of society. The curriculum refresh groups learning in broad bands – and this presents problems for specific guidance and benchmarks.

In the English curriculum, one of the literacy goals for learners in the year 1-3 band is to “use decoding strategies with texts to make meaning”. This is far too broad to be helpful in teaching or assessment in any specific way.

More nuanced progress indicators are still being developed, but the draft examples suggest there will be more guidance in more specific age bands.

Read more:
Education expert John Hattie’s new book draws on more than 130,000 studies to find out what helps students learn

Guidelines and benchmarks

As well as through the curriculum, teaching will be supported by the Literacy & Communication and Maths Strategy and the Common Practice Model. As an educator, I hope the final versions of these documents will offer clear guidelines for both teaching and assessment.

And there are new resources recently provided to schools that contribute usefully to a systematic and successful approach to literacy teaching. These are based on current evidence of how reading is best taught. They include a word-learning progression framework, and decodable readers with lesson plans.

All of these resources should provide useful direction for schools in their literacy teaching. While we can never make the task of teaching literacy simple, specific guidelines can make the pathway for teaching more straightforward.

More focus on the basics need not be boring for learners, either. I recently observed a lesson where the children were learning to decode new words. At the end, a six-year-old said “that was fun, can we do more?” The act of laying foundations for literacy is anything but dull.

The National Party’s call for guidelines around “teaching the basics brilliantly” speaks to a vital part of a rounded education. More detail is now needed about what “brilliance” will mean in practice, just as we need more detail on the current curriculum refresh. Making foundation skills a key component of the curriculum may not be the whole answer, but it is absolutely necessary for overall success.

Safeguard deal shows Bandt’s Greens party has come of age

It is understandable if observers have struggled to neatly categorise the agreement struck on Monday between the Albanese Labor Government and the Australian Greens party to progressively reduce industrial emissions.

Across political, environmental, and even business circles, opinions vary over whether Labor’s substantial renovation of the Coalition-era safeguards mechanism represents a decisive break from Australia’s decade of climate paralysis or is simply more tinkering when structural reform was needed.

At best, it is modest progress that could unlock wider action on emissions while making it harder for new fossil fuel projects, particularly coal and gas mines, to get going. That’s the view of the Australian Conservation Foundation, for example, and of the Grattan Institute’s energy program director, Tony Wood.

To that extent then, it is a net environmental positive. Perhaps even a major one.

But critics, such as Osman Faruqi, culture news editor for Nine Entertainment, are scathing. The former Greens party candidate believes his old party, whose raison d’être is environmental protection, has been politically manipulated by Labor.

Under Adam Bandt’s leadership, the Greens played hardball in the negotiations, publicly slamming Labor’s climate credentials and demanding a full ban on all future coal and gas projects in exchange for its crucial Senate votes.

“You can’t put the fire out while you’re pouring petrol on it,” Bandt had said to any microphone he could find. He depicted Labor’s approach of forcing existing industrial players to cut year-on-year emissions while also allowing new fossil fuel ventures as contradictory.

This was, and remains, a serious point – even if in the end, Bandt’s party settled for a hard cap on overall emissions by the 215 industrial polluters, limits on greenhouse gas production from any new projects, and a ministerial obligation to step in if agreed emissions limits are exceeded.

Both sides claimed a major victory, which might be the best sign there is of a fruitful negotiation.

For the Greens though, long derided as a party happier in protest mode rather than legislative process, the deal hints at what might be a subtle shift towards seeking material outcomes, even if that means compromise.

To its supporters’ deep umbrage, the party had long carried the blame for having joined with the Coalition to bury Rudd Labor’s emissions trading scheme in 2009, known formally as the Carbon Pollution Reduction Scheme or CPRS.

According to a persistent narrative, the environmental party had succumbed to the mistake of “making the perfect the enemy of the good”.

Greens’ loyalists still bristle at this criticism, eager to point out that Bob Brown cooperated with the subsequent Gillard-led Labor government to deliver an economy-wide carbon price-cum-emissions trading scheme that was superior to the “flawed” CPRS.

Read more:
It’s the 10-year anniversary of our climate policy abyss. But don’t blame the Greens

Faruqi calls the Clean Energy Act of 2011 that instituted those advances “the most serious climate legislation in Australia’s history”.

While this is arguable, it rather ignores its fatal context – namely a polity so divided on climate following the demise of the CPRS that it boosted Tony Abbott and fellow centre-right climate deniers. This ultimately poisoned the well of public support for Gillard and for progressive policy more broadly.

Julia Gillard and Bob Brown sign their agreement in 2010.

Wherever one stands on this historical debating point, there is no denying the dark shadow of 2009-10 loomed over the most recent negotiations around Labor’s improved safeguards mechanism.

Bandt appears to have concluded that sinking another Labor attempt at emissions control – and on essentially the same grounds – would be hard to justify.

But if he was limited at that end by the risk of repeating the CPRS error, he was hemmed in at the other end by the very public interventions of Brown and others. Brown, the party’s revered founding leader, publicly defended his party’s need to vote down Labor’s bill unless it contained the coal and gas ban.

Bandt’s answer to this predicament was to talk tough and look immovable on coal and gas in order to secure other improvements.

What does this tell us about his leadership style and nature of the party he leads?

When Bandt succeeded Richard Di Natale three years ago, I suggested in these pages that he might drag his party further to the left, such had been his combative rhetorical style.

On reflection, I now think this underestimated the significance of his status as the party’s only lower house MP.

What the current episode shows is that, if anything, he has pulled his party towards a more practical orientation, with an emphasis on getting things done.

Read more:
Australia’s safeguard mechanism deal is only a half-win for the Greens, and for the climate

This may reflect several things at once. First, the party’s maturing electoral base, which is seeking something more than determined talk in the face of a gathering climate emergency.

Second, Bandt’s own experience as an MP rather than a senator. And third, the Greens party’s surprising success in the House of Representatives in 2022. In the federal election of May 2022, one MP became four. The presence of other lower house members may have affected the nature of debate and the identification of priorities within the party room.

Lower house electorates call for political representation that is qualitatively different, closer to voters, and therefore closer to political mortality. Terms last a maximum of three years, not six. To hold a House seat, an MP must connect with a broader cross section of voters than in the Senate.

The voting method for senators uses state-wide proportional representation. That means the Greens, as a Senate-based party, could afford to present arguments for strong action on global heating acceptable to a sliver of voters state-wide – sufficient for a senator or two from most jurisdictions.
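The arithmetic behind that “sliver” is the quota used in proportional Senate counts. This illustrative snippet uses the standard Droop quota share, 1/(vacancies + 1); real counts involve preference flows, which are ignored here:

```python
# Droop quota: the vote share that mathematically guarantees one seat
# when `vacancies` seats are filled by proportional representation.
# Illustrative only - actual Senate counts distribute preferences
# rather than using raw first-preference shares.
def droop_quota_share(vacancies: int) -> float:
    """Fraction of the vote guaranteeing election to one seat."""
    return 1.0 / (vacancies + 1)

# An ordinary half-Senate election fills 6 seats per state.
print(f"Quota: {droop_quota_share(6):.1%} of the state-wide vote")
```

With six seats per state at an ordinary half-Senate election, about 14.3% of the state-wide vote guarantees one senator – a far lower bar than assembling a majority in a single-member House seat.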

In contrast, holding onto lower house seats forces MPs to consider the immediate on-ground economic and employment implications of policy positions.

It may well be that the very presence of three new lower house MPs in the Greens party room has shifted the balance of internal debates to more politically attuned goals. And to debates where the economic and employment downsides of policy must be more frontally addressed.