Episode 15: What Algorithms Do
In this episode: what do algorithms do? And what does that have to do with aging?
Featuring Kim Sawchuk (Professor of Communication Studies, Concordia) talking with host Sally Chivers about:
- Algorithms and ageism
- Misinformation, disinformation, and malinformation
- Data harms
- The need for better policy and trusted experts
- AI and the digital swamp
Learn more about the Aging in Data research project.
Read "Caring for data in later life – the datafication of ageing as a matter of care" by Vera Gallistl and Roger von Laufenberg.
Discover Joanna Redden's work at the Data Justice Lab, especially on data harm.
Wrinkle Radio is a proud member of the Amplify Podcast Network. We are grateful for funding from the Social Sciences and Humanities Research Council of Canada and support from Aging in Data and VoicEd Radio. This episode includes music by Duce Williams, Jamison Dewlen, and Michael Shaynes via Artlist.
Stacey Copeland 00:01
You're listening to the Amplify Podcast Network.
Kim Sawchuk 00:15
Algorithms make decisions about who gets a loan from a bank, and that's age based and also based on gender. So you can have a great credit rating, but if you don't keep that credit line going beyond the age of 70, and you switch banks, you may not be able to get the $50,000 or $20,000 credit limit you need. You may only get $10,000, which also assumes that people don't work beyond the age of 70. An algorithm related to insurance is in place so that in Mexico, if you're over 75, you can't rent a car. There's all kinds of ways that age crosses with insurance systems, financial systems, and increasingly healthcare systems as we enter into public-private partnerships. Where does our health data go? Who's actually tracking it? Is it secure?
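To make that concrete, here is a minimal, entirely hypothetical Python sketch of how an age cutoff can end up hard-coded in a credit-limit rule. No bank publishes its underwriting logic, and every number and field name here is invented; the point is only to show how the assumption "people don't work past 70" becomes a line of code.

```python
# Hypothetical sketch of an age cutoff hard-coded into a credit-limit
# rule. All thresholds and field names are invented for illustration.

def credit_limit(credit_score: int, age: int, years_with_bank: int) -> int:
    """Return a credit limit in dollars under made-up underwriting rules."""
    if credit_score >= 750:
        limit = 50_000
    elif credit_score >= 650:
        limit = 20_000
    else:
        limit = 10_000
    # An age-based override: past 70, the model assumes no employment
    # income and caps the limit, regardless of the applicant's record.
    if age > 70 and years_with_bank < 1:
        limit = min(limit, 10_000)
    return limit

# A 72-year-old with an excellent score who switches banks gets capped;
# a 45-year-old with the same record does not.
print(credit_limit(credit_score=780, age=72, years_with_bank=0))  # 10000
print(credit_limit(credit_score=780, age=45, years_with_bank=0))  # 50000
```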
Sally Chivers 00:58
Welcome to Wrinkle Radio, where the stories we tell about aging matter. I'm your host, Dr. Sally Chivers, and I am so glad you're here. I'm joined again by Kim Sawchuk. At the end of the last episode, we talked about what algorithms are. In this episode, we talk about what algorithms do.
Kim Sawchuk 01:41
Algorithms: when you use them to buy something online, they have your credit card, they'll know your age, they'll know your location, they'll know the time of day you did it. They'll be able to see your history of purchasing behaviors. And you start to see the way that that produces this kind of algorithmic imaginary of aging, where suddenly, at 55, I'm getting things about my saggy butt and my saggy everything, my jowls, and consumer solutions that are there to make sure that I don't feel the nefarious effects of aging and I don't look old. But the representation and message that's sending out socially and culturally is that aging is shameful and it's something to be feared, but we have a solution. It's called a product. That's kind of one dimension of it.
Sally Chivers 02:26
That's Kim Sawchuk, Professor of Communication Studies at Concordia University. She's also the director of the ACT Lab, which stands for aging, communication, technology. She's director of the Aging in Data project, and she's director of engAGE, Concordia's Centre for Research on Aging. The algorithmic imaginary of aging sounds enticing and even compelling, and maybe so, but that imaginary too often relies on exploitation, and that exploitation is based on the ageism we fight on Wrinkle Radio. Invading our privacy to sell us products is creepy, but it gets creepier.
Kim Sawchuk 03:15
Algorithms also amplify what in political terms people talk about as MDM: misinformation, disinformation, and malinformation. Misinformation, we see that happening when people are basically not telling quite the whole truth about something or providing misleading information, perhaps intentionally, perhaps not intentionally. Disinformation is when there is a serious attempt to put out something that is absolutely not true, to have a particular end, either politically or swaying people in terms of things like vaccinations. COVID revealed a lot of that. Malinformation is, like, just bad information. Those are kind of examples. We know that in 2019 and 2020, and the Canadian government has documented this on its site, there was all kinds of misinformation, disinformation, and malinformation about vaccines. We know that science isn't perfect, but we do know that there's data out there that's better or worse, and so people can't make good decisions if they're being targeted as a population with misinformation, disinformation, or malinformation based on their user profiles that can then be tracked and correlated. And that's what algorithms allow you to do. That's what companies that have the capacity to collect big data do. But it's still within the realm of probability, which means sometimes you hack the system by not behaving as they expect you and your age cohort to act. So I'll get stuff all the time that's about trucks and knives because I camp.
Sally Chivers 04:41
I am five foot two, and that's if I stretch and round up. So I have to laugh out loud when I get Facebook ads trying to sell me big and tall clothes. Is the algorithm out of whack? Probably not. I'm in lots of disc golf Facebook groups. The ideal physique for flinging a frisbee is tall, lanky, and long-limbed. Maybe I should find another hobby, you say. Yeah, okay, maybe. Still, the algorithm has made a pretty good guess. Maybe it's even trying to help out. What do algorithms do?
Kim Sawchuk 05:22
Algorithms, you know, they search for us, so they're helpful. They can rank, and they give information in terms of an ordering, and they recommend. They do kind of those three things. So they can control what it is we get in terms of information as we do our daily Google search. We have to be aware that all of us are more and more reliant on these systems for finding information about health, for example. So where are people going? How do we know what a reliable source is anymore?
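As a rough illustration of those three jobs, search, rank, and recommend, here is a small Python sketch using made-up articles and a made-up user profile. Real systems use far richer signals, but the shape is the same: the same query can surface different content depending on what the system has inferred about you.

```python
# Minimal sketch of search, rank, and recommend over invented articles.

articles = [
    {"title": "Aging and decline: what to watch for", "tags": {"aging", "health"}},
    {"title": "Community arts programs for all ages", "tags": {"aging", "culture"}},
    {"title": "Anti-wrinkle products reviewed",        "tags": {"aging", "consumer"}},
]

def search(query: str, items: list) -> list:
    """Search: keep items whose tags match the query term."""
    return [a for a in items if query in a["tags"]]

def rank(items: list, profile_tags: set) -> list:
    """Rank: order items by overlap with the user's inferred profile."""
    return sorted(items, key=lambda a: len(a["tags"] & profile_tags), reverse=True)

def recommend(query: str, profile_tags: set, k: int = 2) -> list:
    """Recommend: return the titles of the top-k ranked search results."""
    return [a["title"] for a in rank(search(query, articles), profile_tags)[:k]]

# The same query, two different inferred profiles, two different orderings:
print(recommend("aging", {"health"}))   # decline-and-health story first
print(recommend("aging", {"culture"}))  # community-arts story first
```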
Sally Chivers 05:49
For the second episode of Wrinkle Radio, on age panic, I Googled aging. Just aging. Every hit that came up was about health. I clicked on the first link, from the Mayo Clinic. It told me all about what types of physical decline I needed to worry about. The Mayo Clinic seemed to me like a reliable source, but I didn't do a lot of work to make sure I really was on the Mayo Clinic site, that this was the research-based part of the Mayo Clinic and not some commercialized version of it. Plus, when I typed aging into Google, it spat out what an algorithm imputed would appeal to a middle-aged woman who Googles far too many medical ailments. I was still in grad school when the internet was starting to reach people at home. We believed it was going to be this democratic space that would change public discourse for the better. Now, I kind of want to go back and give all of us a hug for our endearing optimism. But the problem isn't really the internet, or even the algorithms in and of themselves.
Kim Sawchuk 07:02
People who engage in misinformation, disinformation, and malinformation are increasingly more sophisticated, and so people are targeted with scams, given opportunities to get a really great deal on something they want, and then if you click on this, you're going to get that. There's ways that the software, but also the interface, makes it difficult for you to even find where the x is to say, I don't want to see this ad, and then you end up clicking on an ad, and suddenly you're being tracked as being interested in this product. Even now, we're being asked about our cookies, and I realize how difficult it is to figure out what it is I'm actually saying. I want them to use what's necessary for me to be able to use the system, but they'll put it in language like confirm your choice, or accept all, reject all. It's very confusing. So you can make mistakes, and suddenly you're accepting all these ways of being tracked that you actually don't agree to.
Sally Chivers 07:51
I hate to live with the idea that there are people out there to get me, but predators frequent the digital world as much as anywhere. I'm not just talking about the stalkers. I'm talking about those who hide under a cloak to make money off fear, or just to make money. Some legitimate businesses track your habits, desperately guessing what color of hoodie will get you to click. Some nefarious interlopers feed you garbage to change how you vote. Well intentioned or not, this exploitation thrives because of those early internet promises of democratized access to knowledge, because they can hide behind the mysterious processes that drive the engine, because the internet is all about profit. Your attention is currency.
Sally Chivers 08:48
Algorithms, of course, accelerate this ridiculousness, but data harm comes from the forces behind the algorithms. Kim's been doing research specifically on how data harm relates to aging and older adults.
Kim Sawchuk 09:04
Algorithms are connected to what we call data harms, which we've been looking at: the adverse effects that are caused by the use of data by organizations or others, effects that may impair, injure, or set back a person, an entity, or society. And so thinking about this from the perspective of age is thinking about how our aging selves intersect with these calculation machines, with digital systems, and within a networked society, and what potential opportunities may be there for us if we can use these things well and if we get good sources of information, but also just being aware that there are difficulties and liabilities and potential traps that come from our use of these systems that may not be so benign.
Sally Chivers 09:53
You might be sitting there trying to reassure yourself that this doesn't really apply to you. Maybe you live in Canada, so you feel more protected. Well, unfortunately, I have some bad news for you.
Kim Sawchuk 10:06
In 2018, the Canadian government finally made businesses start to report how many data breaches there were. In 2019, when they issued a report on the impact of data breaches on the Canadian population, it showed that 28 million Canadians had been affected by a data breach. The number is unbelievable. Usually, data breaches are in relationship to large organizations or businesses, like a bank. Laurentian Bank had a major data breach here, and so they can get a client list. They can get information not just on credit card numbers but on bank balances.
Sally Chivers 10:48
When I was on sabbatical, my spending patterns changed. That happened to be when my bank instituted some kind of AI tool that, unbeknownst to me, created a budget. There are so many problems with this. For one, they based the budget on what I spent while on reduced salary and traveling the world, and now my bank chides me that my spending is over budget in categories I had no expenditures in because I was away. It seems benign, but what if those budget judgments also affect whether they approve a mortgage or offer me a decent deal on a credit card?
Kim Sawchuk 11:28
The algorithms produce those really nice little fancy things that you'll get from a bank or wherever, that sort of say, here's your spending ratio, or here's your whatever. Is that information helpful? Perhaps. Sometimes I want to know what my spending patterns or behaviors are, for sure. What's not so clear in the Canadian context is who has access to that information, whether that data is being bought and sold, and who's getting it, and then what other kinds of measures we need to have in place as a society to prevent large-scale cyber crimes, but also individual targeting of older adults. For example, I've looked at aggressive sales practices where they would talk about golden lists, which are basically figuring out location and cross-referencing it with age, and location is often linked to income. And so you then decide who you're going to make a call to, to make it more likely that you can sell a service to somebody who wants to talk to you and may be at home and available. Telecommunications companies were doing that, and that's what we wrote about, and we did policy statements with older adults to investigate that, but also started to understand it, from working with older adults, as a form of systemic abuse. And now I would call it a data harm, but the data harm stuff is connected to what we call older adult mistreatment: when you specifically target older adults, using the potential of data, to see them as a population who are not vulnerable in and of themselves but are rendered vulnerable, and are targeted for scams that are age specific, scams that are related to grandchildren or nephews, scams that are related to giving over financial information. I almost got caught when my dad was sick and I was taking care of him, and I was exhausted, and suddenly I realized I almost gave out credit card information to someone who sounded so plausible, from my bank. How did they get my phone number? They had my cell phone number, and they knew the same bank that I used.
Sally Chivers 13:22
I remember falling for a scam. I'd just taken over as director of a research center. I got an email from my dean asking if I could talk. I wrote back instantly. I immediately got another email asking me to buy five Apple iTunes cards. I checked the email address. It was not a Trent University email address. I was mortified. I won't repeat what I wrote in my next reply, because then I'd have to slap an explicit warning on this episode, and I only want to do that when swearing is part of the point. I probably should not have replied again, but I was furious and embarrassed, plus who or whatever it was already knew I was there, answering that email address, so I figured there wasn't too much more to lose. Might as well tell them right where to go and where to put their gift cards on the way out. Kim offered some better ideas about how to protect ourselves from data harm.
Kim Sawchuk 14:19
How do we protect ourselves from it? Well, as individuals, we do campaigns with older adults, like with RECAA, where we tell stories of data harms, and they share how they avoided them, so that they're not just seen as people who are victims; there's something you can do. But I also think, like everything, older adults are made to feel that if I confess to the fact that I've been scammed, I'll be made to feel that people will think I'm losing my marbles, and they're going to take away my agency, or they're going to start monitoring me, or I'm going to end up in some long-term care facility that I may not want to be in. There's concerns of loss of agency that can amplify. So there's a lot of underreporting of a lot of things, because there's shame, or it's my fault because I don't understand. So I've done a talk called "It's not your fault," and I had about 80 people who came, and people were crying, because they had been told that, basically, you don't know how to take care of yourself. How stupid can you be for clicking on that? How did you not know? Well, living in this data world is so complex that all of us are scrambling to keep up. So we tell older adults: it's not your fault; we're all facing this challenge, and we can't just place it on individuals. We need to have better legislation, privacy protection laws, and cybersecurity systems in place for all of us. That's important, because otherwise it's thrown back onto the individual, and people are made to feel like, what's wrong with you? That's with the data breach stuff too: reminding people that data harms and algorithms are related to that, that there's targeted campaigns, that it's related to cyber attacks and cyber crime that is real, and these people are really sophisticated, and increasingly so.
Sally Chivers 15:54
A lot of the scams are financial. But for the last few political elections, a combination of treachery and algorithmic targeting has created data harms that putatively affect election results. Kim has suggestions about how we could protect ourselves and our democracies from those data harms.
Kim Sawchuk 16:14
We know that in the political realm, we're now contending not just with deep fakes, where they can take pieces of people's voices and make it sound like somebody is speaking when it's not them, but with what's called cheap fakes. And we saw that related to age when Joe Biden looked like he was walking into a field and talking to himself, but the video had been cut, and the person he was speaking to, who had just arrived by parachute, wasn't in the frame. There's so much potential for manipulation of information and data, it's hard for an individual to keep up. That's why we do need to have trusted experts. The other thing that all of this does, the overall political intent, is to try to make people not trust public institutions, governments, and organizations, and not to trust systems, or to give up and go, it's all too big. There's nothing I can do about it. What the hell? I'll just click on everything, which is, some days, how I feel, I must confess.
Sally Chivers 17:04
Kim's suggestion seems so logical, but it's more subversive than it might appear. In the university realm, we're getting used to being devalued for the very thing we used to be valued for: our expertise. We have standards that govern how we conduct research, which is why I would hope the audience would trust us more, even though, or because, we work in a publicly funded institution. Now that information proliferates and becomes readily available beyond the so-called ivory tower, university-grown knowledge might seem obsolete. Just Google it, right? If ChatGPT can write an essay, why read one by a professor? But that easy availability, layered with the many groups eager to peddle fake information, doesn't mean we should reject publicly funded institutions. Quite the opposite. We need to elevate the role of trusted expertise. I get that might sound self-serving, since you're hearing it from a couple of trusted experts, but since we are trusted experts, you might want to take our word for it. We're only going to change aging, at a time when we perhaps most need to change it, if we can prevent data harm. That's only going to happen if the data world takes seriously the expertise of researchers on aging, especially researchers working with older adults. The stakes are high.
Kim Sawchuk 18:34
For older adults, there can be exploitation like that: scams that are related to dating apps. There can be loss of services, potentially, because if they know you're of a certain age and you're at risk of getting a certain disease, potentially, can you still afford to get travel insurance, which rises exorbitantly for people once they hit 70? They can have information on your whereabouts or your non-whereabouts. Well, how do you feel about that? How does that work? If, for example, you have a sensor system in a long-term care facility that's monitoring and trying to do fall prevention, wouldn't that be great? But every time you move, or something gets moved, an alarm goes off. So the alarm is always going off. So then finally, nobody ever ends up using the system, because it's going off all the time and you don't get a good night's sleep. Vera Gallistl has written about this in her ethnographic work and studies of the implementation of AI and algorithms in long-term care facilities that are supposed to be really fantastic for preventing falls. There's a whole issue of how you feel about privacy and surveillance as you age, which has liabilities, because of the way that people think that you may be incapable and may judge you based on one slip. That can be frightening to you. The way that insurance is calculated. The way that banks give out loans and provide financial security for you. The way that older adults may be targeted for politics or elections. The way that information or disinformation can be given out in terms of health care. The way that online services may be put on board that mean you can't access the kind of rights you might have access to in terms of a program, because it doesn't take into account that you may not have heard about it, or because the information about available services was given online and you don't go online. There can be things around housing: how old are you, should I give you a lease, etc. There can be things about jobs. There's increasingly automatic systems to deal with the large amount of incoming CVs for a job, systems that could filter because of age immediately, even though, in many places in Canada, age discrimination for employment is illegal. Then there's a whole question of injustice related to creating technologies. For example, facial recognition devices are being thought of for being able to understand what people with dementia might be trying to say to us but cannot communicate. Laudable. But when the systems are built, they're not using the actual people; they're doing simulations of people with dementia, or simulations of older adults, like they do in long-term care facilities, and not involving those people. And so you're creating simulations that may not be ones that are going to be accurate anyway. These are just kind of a list of examples of what we would call data harms, where we have to think about: how do we become aware of them? How do we study them? How do we involve older adults in analyzing them? And how do we create a better way that we can benefit from some of the good things?
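Kim's point about automated CV screening is easy to see in code. The sketch below is hypothetical, with invented field names and thresholds rather than any real vendor's product, but it shows how a filter can discriminate by age without ever storing an age: graduation year does the work as a proxy.

```python
# Hypothetical sketch of an automated CV screen that filters by age
# without an "age" field: graduation year acts as an age proxy.

from datetime import date

def passes_screen(cv: dict, max_years_since_graduation: int = 25) -> bool:
    """Made-up screening rule over invented CV fields."""
    years_out = date.today().year - cv["graduation_year"]
    # This single condition quietly excludes most applicants over ~50,
    # even in jurisdictions where age discrimination in hiring is illegal.
    return years_out <= max_years_since_graduation and cv["skills_match"] >= 0.7

# Two identical skill profiles, separated only by graduation year:
print(passes_screen({"graduation_year": 2010, "skills_match": 0.9}))  # True
print(passes_screen({"graduation_year": 1985, "skills_match": 0.9}))  # False
```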
Sally Chivers 21:30
That's a lot of potential and actual harm. Don't worry, there's a lot we can do together to protect ourselves from those harms.
Kim Sawchuk 21:39
We see, working with the Aging in Data project, one of our companions, Unmil, has a cell phone from North America and a cell phone from Austria, and we've just been realizing how much scam stuff he constantly gets on his North American cell phone, and how little he gets on his Austrian phone. Which means he doesn't have to spend all that time going through each of those messages, figuring out whether they're real or not. But also, there's something in place within the European system that is actually giving him time in his day to not have to deal with crap, and with the potential of being targeted for fraud. It's possible.
Sally Chivers 22:23
It's possible to have better policy, but it's really difficult to hack the system on an individual level.
Kim Sawchuk 22:30
Algorithms are often what we call black boxed: you can't see what they are, and that's why studying them is so hard. They're proprietary; it's all about IP. We often don't know what those algorithms are, and sort of reverse engineering them is really difficult. That's why it's so interesting when you go to conferences and you sometimes see people who've worked in the system, say, in the banking industry, whistle-blow. That's how I felt when I worked in a marketing research office. I was a whistleblower, in a sense: this is how the system works. So asking older adults to hack the algorithm is difficult, but sharing stories of how you've been targeted is possible. That will allow you to do what you can to understand and assess a situation, and also to play, in a sense, with the sense that you're being shamed, and to play with the ways that, in academia, we try and protect profiles.
Sally Chivers 23:23
Kim told me about what RECAA, Respecting Elders: Communities Against Abuse, in Montreal has been doing to combat these data harms in their characteristic playful but also very serious way.
Kim Sawchuk 23:37
What they did is they created a series of videos with Eric Craven, a wonderful person who we've worked with for years. They used Snapchat, and they were asked to tell stories about a time that they'd been scammed or somebody had attempted a scam. And so they tell it as a cat, or as a piece of broccoli (choose your own avatar), or as a dog with a really great Sergeant Pepper kind of outfit on. So it was both playful and took away the shame of telling the story. People were telling the story and giving information. You know, here's what to look for. Check the email address to see whether or not it actually is the email address corresponding to your financial institution. Know that there are scams from Canada Post all the time, for $1.39; it's not the money they want, it's your data they want, and your credit card information. Here's a scam: if it's that cheap, if it looks like that dress is $5, it's maybe too good to be true. Shop from trusted sources. It's a pain to maybe use different authentications, but do it. Change your passwords. Write those passwords down as much as possible. Oh my god, me and my passwords. Everybody has password agony. So trying to point out things that you can do to empower yourself as an individual, but at the same time, the message is that it's not just us, and it's not just you. We're being targeted, and these people have a lot of resources. And let's try and do the best we can, but certainly let's not feel any shame around this. And then let's lobby our governments to be able to put stronger consumer protection and privacy information measures in place. A lot of the government's efforts at dealing with online harms have to do with protecting children from pornography, which they're obsessed with, and not with these kinds of everyday malpractices.
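RECAA's first tip, checking whether the sender's address really corresponds to your financial institution, can be made mechanical. Here's a tiny, hypothetical Python sketch; the domains are invented. The exact-match check is deliberately strict, since lookalike domains are a common phishing trick.

```python
# Small sketch of RECAA's advice: check that a sender's domain exactly
# matches your institution's. All addresses below are invented.

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_legitimate(address: str, trusted_domains: set) -> bool:
    """True only if the sender's domain exactly matches a trusted one."""
    return sender_domain(address) in trusted_domains

trusted = {"mybank.ca"}
print(looks_legitimate("alerts@mybank.ca", trusted))         # True
print(looks_legitimate("alerts@mybank-secure.ca", trusted))  # False: lookalike
print(looks_legitimate("mybank.ca@scammer.net", trusted))    # False: spoofed prefix
```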
Sally Chivers 25:37
So far, we've been talking about what seem like garden-variety algorithms. These are more like the market research Kim did earlier in her career. We are vaguely aware of them, don't really understand them, and we're more harmed by them than we know. But there is a newer algorithmic kid on the block lately, and I'm going to be honest, I'm pretty sure he's a bully. Yes, I'm talking about AI, artificial intelligence. We talk, probably too much, in the university sector about how ChatGPT makes it harder to know whether students are cheating or not. Sometimes we talk about how to use these new tools well. But I'm more worried about other processes that are shortcut by using AI, even when we don't know what's happening. I told Kim, when I interviewed her, that what I especially appreciate about her research on aging and data is that she gets under the hood to show us the social implications of the technologies she studies. Even when certain parts of information are black boxed and proprietary, Kim can explain how they work, especially when it comes to aging. So I had to ask her to give us a peek under the hood of AI.
Kim Sawchuk 26:55
It's all part of a different transformation of the same kind of, I guess, what's happening with what algorithms can do, I would say. And so AI is predicated on calculations and technical systems and coding systems, and the potential to gather information but also to engage in forms of machine learning that is almost seemingly simultaneous to us asking. So again, I guess I would say it's like an acceleration of a tendency that's been in place around technology for some time, and we're running to catch up with what it's doing. We're seeing this in our teaching. I realize I should go back to my students' essays and reread them, in terms of thinking about how much they used ChatGPT in writing them. Is it just a tool that can be helpful for writing a first draft? I mean, which ones do we trust? So again, I love librarians. So when I have a question, I go to my librarian and I say, what would you recommend? For example, are there any AI tools I should know about that would be helpful for me in doing research? Because when you think of doing a Boolean search in Sofia, if that's the system you have, you're basically using an algorithm to be able to get you the information you want expeditiously and efficiently. So librarians can be really helpful as researchers, and I think the general public should have access. We should have a Canadian, not a Privacy Commissioner who investigates things after something terrible has happened, but someone who's there as a kind of Canadian librarian to help us negotiate information, who can be there to answer our questions and to tell us what data systems we need, and who can do that kind of vetting in advance if we have a question. We see the increasing information out there. You know those freaking chatbots? It's like new phone hell. It's a new version of voicemail hell, frankly.
Sally Chivers 28:49
Tools like ChatGPT rely on the data they consume. They steal that data; they ignore copyright. Those problems are getting bad press right now, as is the massive carbon consumption that fuels AI. But we don't talk as much as we should about how they rely on partial, biased data. Yes, that data is racist and sexist. It's also ageist. That seems like a problem that's easily fixed, right? Just feed it better data. I asked Kim about a student of mine who was talking to ChatGPT about course readings to help him understand them better. He was not getting the machine to write his essays. He was earnestly working harder than most to try to do difficult readings for a graduate course in a second language. He used this kind of conversational, one might even say Socratic, method of learning to get there. Should I be happy that the machine will now know more about fighting ageism? Maybe. But I'm also worried about the copyright of the authors whose work I was teaching.
Kim Sawchuk 30:01
I've been asking myself this question for, I'll say, 30 or 40 years, since I started being aware of marketing research. Do we try and create better products for ourselves by participating and answering the survey, or not? Because the liability is, if you don't, then they just keep creating the same garbage. I always quote Donna Haraway on this: I will take the risk of trying systems out, to see and find out what they generate, to be able to understand better from within, as much as I'm capable of, because I'm not a coder, what it is they do. I started to use a program that my librarian gave me called Elicit, which is very good for doing searches of abstracts, and then can do a whole range of things in terms of summaries and all this stuff. Really kind of helpful. But then I started to realize, God, in terms of age and mobility, everybody's from Finland. Where is everybody I know? Nobody I know is in here. So it's not about what's there; it's about what's not there, sometimes. And then it's also what clichés are being regenerated and re-represented and regurgitated.
Sally Chivers 31:10
Usually on Wrinkle Radio episodes, I ask my guests how the topic they're talking about applies to them. Here's what Kim said when I asked her what she does to stay safer from data harm.
Kim Sawchuk 31:20
Well, I try and change my passwords, but my family will laugh at me because of my choices, for sure. I more carefully check on where emails are coming from. Because we have options about cookies, I avail myself of them and take time. Reading consent forms is still a big challenge; if I want something, I usually consent, for better or for worse. I go on trusted websites, which can be awful, because sometimes you think, right, we want small entrepreneurs to be using the internet, but then we also know that larger corporations may have better data protection in terms of online shopping. So there's conundrums there. When I'm searching for information, I usually check a variety of sources to double-check that information. I don't know if that's protection, but that's protection against malinformation, I think. Following RECAA stories, I've heard that if a deal is too good to be true online, it probably is. Don't trust a freebie. I limit my social media presence a lot. I only sign up for things I know are going to be extremely useful. I don't put private information on social media. I'm careful about what I say. We saw just recently, with the Trump assassination attempt, what happened to a professor at UBC from what was a casual comment. So I think there's a whole set of comportments, behaviors, practices that we can individually do. I try and share stories and make people not afraid to share stories of things they've encountered that have been suspicious. I work on this intellectually and try and create public events that inform people but also make them not feel ashamed to talk about these matters. And I think we need to do more work on policy, pressuring governments and coming up with innovative solutions in advance, so that people who are not in universities, not in schools anymore, all of us, can stay informed throughout our lifetimes. So I like my idea: we need an office of the librarian in charge of helping us, working with the Privacy Commissioner and all the other agencies, not just protecting us because we're vulnerable, but realizing that the people who have a lot of power in terms of the data world have enormous resources and money. That's why they do it, to get money. And we need to have regulatory agencies and governments that can help us both understand and make better decisions, and also monitor and track nefarious and bad actors who are abusing, misusing, and really ruining people's lives. I'm so glad, I mean, the Canadian banking system, love-hate relationship to it, but it's highly regulated, and so it's very stable. Look at other countries; it's a mess. And as we enter ever more into what we call the AI and digital swamp, we don't even know what creatures we're going to encounter.
Sally Chivers 34:00
That again is Kim Sawchuk of Concordia University. I don't know about you, but if I'm going to wade through a digital swamp, I want Kim Sawchuk by my side, even though I may decide to go in disguised as a cabbage. Thank you, Kim, for talking me through this morass. Kim had her own thanks to offer.
Kim Sawchuk 34:31
I really want to do a shout-out for this to my colleague, Joanna Redden. It's really a lot of her work on data harms, and the conversations we've had about the intersections between critical data studies and age studies, that have really influenced my thinking and profoundly deepened my knowledge of the language to use and of how these systems work. So thank you, Joanna, and to my Aging in Data colleagues doing the surveillance studies project as well.
Sally Chivers 35:02
This has been Wrinkle Radio. I'm your host, Dr. Sally Chivers. Thank you for listening. You can find episodes, show notes, and transcripts at Sallychivers.ca/wrinkleradio. Would you believe this is our two-year birthday? If you want to give us a little birthday present, please go ahead and leave a five-star review for Wrinkle Radio. You can do that on iTunes or Spotify, or pretty much wherever you listen to us. It feels really awkward to ask you for that, but it really helps out with the algorithm. And please tell your friends, tell your family, tell your neighbors, tell your trusted experts. And remember: don't panic. It's just aging.
Music 35:50
I want to grow flowers in a house that's made of you. Sit and watch the sunset, get some wrinkles on my forehead, I want to build fires