I Worked in the UK Government’s AI Risk Team. When I Spoke Up About Its Ethical Collapse, They Punished Me.
Caught between the system and my conscience, I chose to speak out.
**For the avoidance of doubt: I am not suicidal. I am young, in excellent physical and mental health, in full possession of my mental faculties and am looking forward to building a free, full life. Any sudden deterioration, unexplained harm to my person, my wellbeing or any unexplained interference with my communication must be treated with the highest suspicion.**
I: About The Writer
I was born in the UK to Arab parents and attended a state school before earning a BA in International Relations at SOAS, University of London and an MA in Intelligence and International Security from King’s College London’s Department of War Studies. Before joining the Department for Science, Innovation and Technology (DSIT), I spent almost two years as a PA to litigation Deputy Directors in the Government Legal Department (GLD), a role I was assigned through the Civil Service Direct Appointment Scheme. Before that, I completed an internship in the Cabinet Office’s Economic and Domestic Affairs Secretariat, helping to facilitate a Cabinet Committee tasked with public service recovery in the wake of the Covid-19 pandemic. At GLD I consistently exceeded expectations: I received several bonuses and praise from colleagues, did the work of multiple people to cover staffing shortages, was entrusted with an additional secretariat role boosting legal awareness across government, and was even offered the position of PA to GLD’s Director of Litigation. I respectfully declined the offer in order to pursue my dream role - AI Policy Advisor at DSIT.
I was assigned to work in the Central AI Risk Function (CAIRF), the team tasked with identifying, assessing and mitigating AI risks. As somebody whose academic research had always focused on the intersection of ethics, technology, and security, I believed deeply in the mission of CAIRF. This role was never just a job for me. It felt like a culmination of everything I had been working towards. But my dream role unravelled into something far darker.
I am not writing this for attention, fame or sympathy. I am a deeply private individual. I am seeking accountability.
II: My Experience in CAIRF
When I joined CAIRF in June 2024, I was thrilled to have the opportunity to contribute to its critical mission. For the large majority of my tenure, I was the only full-time woman on a team of around 12, and my role quickly became overshadowed by a hostile work environment. In an environment dominated by male leadership, my presence felt like something merely to be tolerated rather than nurtured.
The majority of my time in CAIRF was spent under the leadership of Dean Whitehouse, the Head of AI Risk Assessment. Despite his position as the public face and voice of AI safety at DSIT, he perpetuated a culture of disrespect. His behaviour towards me was not only dismissive but often belittling. On several occasions he made sexist remarks, once openly complaining in a team meeting that the audience of a West End performance of Mean Girls he attended was “full of girls, which was annoying”. This was just one of several instances where his flagrant disregard for women was put on full display. Dean had assigned me to work on a long-term project on AI-driven violence against women and girls (deepfake non-consensual intimate imagery). I find it deeply disturbing that someone with so little regard for women in daily interactions was overseeing work on gendered AI risks.
Returning from two weeks’ sick leave with a diagnosis of depression, I was welcomed back by a colleague who publicly called me “useless” in a meeting in front of my line manager and another colleague. This incident was not an isolated one but a symptom of the overall culture of disregard and disrespect that permeated the team. The constant undermining left me questioning my own self-worth.
But it wasn’t just the hostile environment. There was a deeper issue related to my identity. During a private and sensitive conversation with Dean, he questioned how my dual nationality impacted my political views on the conflict in Gaza. With a smirk on his face, he pushed me to opine on the conflict. This was a loyalty test, an attempt to undermine my professional standing by focusing on my ethnicity rather than my work or contributions. I was stunned at his questioning, which seemed designed to corner me into a politically compromising response.
This toxic environment was compounded by a profound lack of leadership accountability. When I raised concerns in my January resignation letter about how the hostile environment ultimately undermined the credibility and purpose of CAIRF, I was met with silence from the joint heads of CAIRF. Days later, Dean removed me from all digital communications and group channels without any acknowledgement of my departure, as if my presence, and my voice, had never mattered. This was a final insult, erasing my contributions to the team and signalling that I was expendable.
CAIRF felt like a boys’ club. I wasn’t there to joke, mock, posture, stroke anyone’s ego or amplify my own. I just wanted to do my job. What is written here barely scratches the surface of what I endured in CAIRF.
Amidst this climate of exclusion, one colleague in CAIRF stood out for his integrity. He consistently treated me with kindness and decency. He saw and respected me. After I left, and before he moved to another department, I sent him a note of thanks. In his reply, he affirmed the toxicity of the culture, describing it as “hostile and disrespectful”, and expressed regret for not speaking up more forcefully at the time. He called the quality of my work “exceptional” and pointed out that the output of some senior colleagues in CAIRF was “never produced to that level of quality”. His words reminded me that the issue here was not me - it was the culture of the team. The egos. The insecurity and fragility masquerading as dominance and superiority. This is not just my story. It was seen, felt, and silently grieved by others too. I include this not to elicit sympathy, but to make one thing absolutely clear: I wasn’t imagining it.
III: Escalation to Emran Mian, Director General for Digital Technologies and Telecoms, and the Quiet Machinery of Institutional Retaliation
I didn’t come to Emran Mian as a threat. I came as a believer - in leadership, in structure, in the possibility that someone would do the right thing. I respected him deeply. I had spent one week covering in his private office and witnessed first-hand the pressure he was under, the expectations, the leadership he projected. I saw him as a man of integrity. I believed he would be the one to listen. In my first email to him, I defended him when my own colleagues treated his name with disrespect. I thought that surely, if anybody in this department would understand the weight of what I was raising, the harm I endured, the ethical rot in CAIRF, it would be Emran. I was so wrong.
After my initial email to Emran in early February, I had a virtual meeting with him. The retaliation began immediately after this and it hasn’t stopped since:
On the same day as my meeting with Emran, I was remotely locked out of my work mobile phone. The device screen displayed a message stating: “This phone has been reported missing. Please return it to Whitehall immediately”.
My leaving letter was not dated or sent around the time of my actual resignation in early January; it was issued the day after my meeting with Emran.
I received an unsolicited message from a senior civil servant in DSIT attempting to involve me in a ‘culture review’ of CAIRF - despite Emran having acknowledged on record that I had withdrawn consent to participate.
Emran’s responses throughout were classic deflection. He dismissed the substance of my concerns and instead implied that I was to blame for not raising concerns earlier. I reminded Emran that the issue wasn’t timing, it was the unprofessional and disrespectful leadership in CAIRF and that I was never safe to speak in the first place.
Within days of asking Emran to clarify whether DSIT had any intention of meaningfully addressing the harm done, I was issued a retaliatory debt of £854.02 for a salary overpayment that, when I disputed it, DSIT themselves admitted on record was a result of the department’s own administrative delay in processing my leaving form upon my resignation. I was given fourteen days to pay the alleged debt. This was a clear attempt to exert financial and psychological pressure on me.
Six minutes after corresponding with an employment law firm, I received an automated message from a UK number telling me to contact that number via WhatsApp. Given that I was abroad and no longer received unexpected or unscheduled calls from the UK, the timing strongly suggests metadata surveillance of my email address.
Just this week, on Wednesday 23rd April 2025, as I had been taking measures to enhance my digital security, I received a phone call to my UK SIM, which I had inserted into a new device. The area code? 01288 - Bude, Cornwall. The location of GCHQ Bude, a listening station operated by the UK’s signals intelligence (SIGINT) agency.
Whilst I did not answer their call, I heard their message loud and clear: We’re watching you.
IV: Legal Reckoning: Discrimination, Duty, and Disclosure
When I explicitly questioned DSIT on the unlawful discrimination I faced under the Equality Act 2010 on the grounds of my race, sex and disability (all protected characteristics), I was met not with an acknowledgment of harm, but with outright denial of the facts and deflection. I challenged DSIT on its failure to uphold the Public Sector Equality Duty (PSED): a legal requirement for public sector bodies to eliminate discrimination, advance equality of opportunity, and foster good relations between those with protected characteristics and those without. The failure was palpable. I also pointed out DSIT’s violation of the Public Interest Disclosure Act 1998 (PIDA), which is designed to protect those who blow the whistle on institutional wrongdoing. DSIT lawyers responded with a panicked, poorly considered email. They lied about the unsolicited culture review request and implied that all of the above retaliation was simply business as usual. This legal reckoning is the bedrock of my claim that the institutional abuse, neglect and active silencing I endured were not accidental but a systemic failure - a failure that now, under scrutiny, makes it impossible to trust DSIT to protect the public interest.
V: The Gatekeepers Who Locked Me Out From Justice

The one person who could’ve shut this down was Emran. He chose not to.
I reached out to Open Rights Group - a digital rights organisation I had once volunteered for as a student. Jim Killock, ORG’s Executive Director, expressed serious interest at first. When I made clear that the issue was too detailed to explain over a phone call and that ORG should read the bundle I had prepared, they flinched.
Journalists showed interest but wanted a killer headline, a spin. They didn’t realise: this was so much bigger than killer robots.
I was referred to McAllister Olivarius, a transatlantic law firm specialising in employment and discrimination law, by a friend who had been a client there. McAllister Olivarius’ stated purpose is to “make whole those who have suffered injustice”. I was put in direct contact with the firm’s Managing Partner, Jef McAllister. He did not reply to my emails. After going through the firm’s general intake, I was linked to Genevie Kuiper-Isaacson, a Senior Associate with expertise in race, gender and disability discrimination and whistleblowing - all highly relevant to my situation. During a free 45-minute consultation, I talked Genevie through specific instances of discrimination I had faced during my time in DSIT, but was not given the chance to explain the ongoing retaliation I faced and continue to face. Genevie assessed that my claim would have no merit due to supposed issues with the statute of limitations, and I was later advised by McAllister Olivarius to seek a second legal opinion.
I sought a second opinion from Slater & Gordon and was linked to a newly qualified solicitor, Hannah Ferry. I paid £150 for a 45-minute consultation that lasted less than 25 minutes. Hannah laughed at me, and even asked if I was planning to return to the Civil Service. In her summary email of the meeting, despite believing my case had no merit, she was more than happy to try to upsell me on interventions that she herself did not think would work. This was more than just unprofessional or tone-deaf; it was predatory and dehumanising.
As the retaliation deepened, I reached out again to Genevie at McAllister Olivarius and copied in a legal trainee, Eu-Fern Lai, due to her prior involvement. I explained the deep isolation and the escalating retaliation I was facing from DSIT. Despite my email being explicitly addressed to Genevie, the reply came from the trainee, who wrote:
“We are very sorry to hear of the continued trouble with the DSIT. Unfortunately, our position remains the same as before with regard to your matter: regretfully, we are unable to assist”.
This was legally incoherent. They mischaracterised the nature of my claim and relied on a misapplied statute-based refusal whilst simultaneously acknowledging that the harm was ongoing. I removed Eu-Fern from the chain and reached out to Genevie, but she did not respond. I made it clear to Genevie over several emails that I was not seeking full litigation, just principled, private resolution. I offered more than 233% of the original fee quoted for a letter from her. I referenced my time in GLD. I included written testimony from a former CAIRF colleague who corroborated the hostile environment I faced. I even offered to draft the letter myself to ease capacity concerns. Genevie finally responded when I copied in Jef McAllister - not with support, but to double down on the same baseless logic, once again asserting the statute of limitations as a barrier despite multiple emails that clearly explained the ongoing harm I was facing and that I was not seeking to take this to an Employment Tribunal. This was the final door slammed shut.
Whilst I had high hopes for McAllister Olivarius, the experience left me feeling dismissed and unsupported. Despite my efforts to explain the ongoing and escalating retaliation, and my proposal to pursue private, principled resolution, the engagement remained superficial. It is deeply disappointing that what could have been a meaningful intervention was reduced to procedural obstruction.
The institutional failure wasn’t abstract. It was made possible by individuals who prioritised self-preservation over justice, and silence and deflection over professional and moral responsibility.
In March, DSIT’s Permanent Secretary Sarah Munby announced her plans to leave the Civil Service this summer. It is a quiet exit, but it speaks volumes. Her departure mirrors the wider rot: a system that steps back instead of stepping up, that turns away instead of facing what it’s built.
But what happens when the system itself is diseased?
VI: What This Means
This is not just about a dysfunctional government department or a string of institutional failures. This is a systems-level contradiction. This is about the soul of our systems, what they reflect, and what they are becoming.
I worked in DSIT’s Central AI Risk Function - the very team tasked with identifying and mitigating bias and harm in AI systems. But when I raised the alarm about the ethical collapse happening within the team, I wasn’t met with integrity or accountability. I was met with silence, erasure and surveillance.
This is a case study in how ethics fail. And when ethics fail at the most basic human level, they will collapse catastrophically at the machine level. If the people regulating AI safety cannot uphold basic human decency, how can we trust them to ensure fairness in the algorithms that shape policing, healthcare, employment?
This goes far beyond one civil servant or one department. It is a warning to every government, company, and policymaker laying the foundations for AI-driven governance.
The UK presents itself as a leader in responsible AI. But if this is the culture behind the curtain - what is being exported to the world?
A “pro-innovation approach to regulating AI” may sound compelling. But it is an oxymoron. Guardrails are not the enemy of progress. They are its precondition. Earlier this year, DSIT’s Secretary of State, Peter Kyle MP, stated that AI adoption will be ramped up to “boost economic growth, provide jobs for the future and improve people’s everyday lives”. But when civil service roles are automated, when trust in institutions collapses due to unaccountable AI governance, when the workforce is largely displaced by AI - then what economy are we left with? You cannot turbocharge an economy when the very people who are the economy, its workers, are destabilised, devalued or replaced. What you’re boosting isn’t prosperity. It’s inequality.
Global questions must now be asked:
Can governments be trusted to regulate AI whilst silencing internal dissent?
What does “AI safety” mean if the humans behind it perpetuate harm?
How can Prime Minister Keir Starmer justify using AI to automate civil service roles when the government’s own AI risk team is in moral collapse, setting a dangerous precedent for governments worldwide?
Those with the power to ask these questions and act on them need to pay attention.
This is not just a national concern. This is an international signal that ethical failure at the human level will become systemic failure at the machine level. If this model of unaccountable AI governance spreads, the cost will not just be measured in careers. It will be measured in lives, freedoms, and the public’s ability to trust any institution again.
We are sleepwalking towards irreversible oblivion - not because of the technology we’ve built, but because we’ve never dared to examine what we’ve built it in the image of. Carl Jung described the shadow as everything we deny, repress and refuse to acknowledge about ourselves: our biases, our hunger for power and control, our capacity for harm. What we refuse to face, we unconsciously enact. And so we have begun to automate the collective shadow. We’ve coded it into our decision-making systems.
What happens next won’t just define how we govern technology. It will define whether we are capable of governing ourselves.
This is not just about ethics. Or politics. Or technology.
This is existential.
VII: This Is Not The End
It’s just the beginning of my archive.
I will continue to publish every single day until DSIT acknowledges the harm done and takes meaningful accountability. I am sitting on a mountain of evidence that verifies everything I have written here. I will not be erased, silenced, bullied or intimidated.
As always, I remain open to resolution. Not out of weakness, but because I believe in justice over vengeance.
See you tomorrow. I will be back with more.
Follow me:
X/Twitter: @syro_001
Bluesky: @syro001.bsky.social
Mastodon: @syro001