Upcoming events

Genevieve Clifford: Transgender Experiences of Digital Poverty in Wales

On 22nd May 2025 at 3:30 pm

Theory Lab, Computational Foundry

In this session I present a paper based on the literature review I have undertaken for my PhD, which centres transgender experiences of digital poverty in Wales. I begin with extant knowledge of digital poverty generally, highlighting a distinct lack of attention to transgender (and even broader queer) experience. In response, I deconstruct Ruiu and Ragnedda’s (2024) three-tier hierarchy of determinants: individual, circumstantial, and structural. I work with this understanding by investigating how transgender people can be represented within it, but critique its hierarchical nature. I then outline the benefits of a “collapsed” determinant model over a more positivistic trichotomy of cause, effect, and intervention. Here, I also focus on Welsh Government policy, noting a lack of specialism at the intersection of transgender experience and digital poverty.

Further, interrogating “trans digital” and “trans poverty”, I question whether poverty can be meaningfully separated from digital poverty in an increasingly digital society, with trans people needing to be online to find community, to participate in wage labour (or access state welfare, where available), and to seek healthcare and leisure. However, transgender political economies make this increasingly difficult, against a backdrop of regressive legislation in the UK and elsewhere, hostile actors, and the wider “enshittification” (Doctorow 2022) of the Internet.

Andrew Martin: On Trusting Trustworthy AI

On 29th May 2025 at 2:00 pm

401 (Board Room), Computational Foundry

Trust – and with it, related notions like trustworthiness – has a long history in philosophy, business, and other settings. In systems thinking, it has at various times been in favour as a counterpart of security. Meanwhile, we see increasingly many systems based on ‘AI’, and calls for that AI to be trustworthy. Applying ill-defined criteria to an ill-defined class of systems does not appear to be a constructive pastime. We propose a hierarchy of kinds of trustworthiness. The high-level concerns of traceability, fairness, ethics, responsibility, explainability, and so on must be supported by a layer of what has come to be known as ‘trusted execution’ – in order to anchor a computation or service in the code and data which deliver it. That layer in turn needs a foundational layer to provide a body of evidence on the provenance of the data and models in use: this foundation can be provided by an ‘AI bill of materials’ derived from the SBOM concept. This talk reports work in progress.