
DFIR In 2026 – AI ‘Button Pusher’ Forensics, Writing Courtroom Reports, Audio Breakthroughs And The Leica Geosystems Conference

Forensic Focus · Archived May 09, 2026

Si and Desi discuss a range of digital forensics topics, from writing forensic reports that juries can actually understand, to whether AI is coming for “button pusher” DFIR jobs.



The following transcript was generated by AI and may contain inaccuracies.

Si: Ladies and gentlemen, welcome to the Forensic Focus Podcast, where Desi and I said, “What are we going to talk about?” and then promptly went off down a rabbit hole talking about stuff and not recording it.

Desi: While we were talking, I actually found something to talk about, and I’ll send you this link while I go down this other rabbit hole so you can see what I was talking about. And we’ll also cover off on your conference that you went to as well.

Si: Yeah.

Desi: But we were talking just then before you hit record — how do you write a report and get information across to people in a way that makes sense? Because if you have, say, three computers doing things, and they’re all doing stuff at the same time, how do you structure that in a report so that someone reading it chronologically is going to understand the concepts that you’re trying to get across? And I was bringing out the fact that this is because — how do we convey information that is essentially one data point at a time when we read? That’s how humans read. Whereas when we’re putting the information down, we have all the information in our brain mapped out. It’s this web of information that we already understand, and how do we pass that off to someone else? Visuals are a really good thing. I was thinking about this the other day. I was reading an indictment, and there were a lot of dates and a lot of things happening, and I was like, “Man, the DOJ — these indictments are so big. Why don’t they just make a graphic at the start that has a timeline of data?” The visual representation would help you digest the information so well, and it could be super basic — just a simple line on a page.
Put a whole bunch of dots, summarize the points and go, “Here are the paragraphs that you go read for each of these,” and that would be super simple. DOJ, if you do listen to us, and lawyers who do all that stuff, please start putting in timelines. That would be really helpful for indictment reading. But I was bringing up to Si that there was a good example of this. There’s a sci-fi show called Dollhouse, and what it is is these people who have committed crimes — they sell their bodies to be dolls, and they get injected with memories, and they give experiences to other people. It’s this whole dystopian world thing where the world collapses and the rich people just get to use these bodies again and again.

Si: So coming to you next year is what you’re saying.

Desi: Yeah. Well, Elon Musk’s brain chip made me think of that. But the premise in there — the scientists who figured out the best way to implant memories, because the way that it was done was it would implant the memories from birth to that point that they’d want to have that personality in the person, and that was very time-consuming. In the end they were just like, “Oh, we can just dump a whole bunch of memories in the brain, and then the brain will just figure it out.” Because the brain just pieces it all together as it needs it. And so when you were talking about that report writing, I was like, “Oh, that’s how it works.” I feel like it works in my brain — I read all this stuff where I’m doing an investigation or whatever, and I’m piecing it all together in my brain, and I can picture it all. But then when you write a report, I can’t just dump all the thoughts onto a piece of paper or into someone else’s brain. You then have to structure the report in a way that makes sense.

Si: And the trouble is you need to structure it in a way that doesn’t make sense to you, but makes sense to somebody else.

Desi: Well, yeah, somebody else, and also multiple people.
You may not know who the reader is, or what technical level they have. Whether they’re neurodivergent as well, because different people will read things differently. And vocabulary choice — English is a terrible example of this, because we have to be really careful about the words that we choose. In a legal sense it’s a little bit better, because the language is very strict. But if you’re writing a report —

Si: Well, it is and it isn’t. I mean, you still have the whole of the English language to work with. But the trouble is that the legal terms aren’t necessarily any more recognizable to the average individual. At the end of the day, whilst you’re — what you’re effectively doing is writing a technical document for a technical audience, it’s just a different technology and it’s a different audience. If you’re using legal terms, yeah, the judge might understand it. The jury’s still not going to have a clue.

Desi: Yeah, that’s true.

Si: So it really has to be… I actually have a book here. I have several books here. I have lots of books here, as you can probably see from the picture.

Desi: And for our viewers, please notice that Si’s office is a lot more clean than it normally is, and for our listeners, it definitely is clean. We’ll ignore the fact that the clean is just because it’s all in boxes.

Si: Yeah. And more to the point, the boxes aren’t actually in the room, so technically at the moment it’s really free. Except this book isn’t here, so it’s still in a box I haven’t unpacked yet. But I’ve got the New Oxford Style Manual for writing. I’ve also got a much smaller, much more concise book, which is unsurprising because it’s about writing clearly. There is actually a standard in England for clear writing. It’s predominantly used for manuals for things that you buy. You buy a new toaster, it explains it in English that people can understand. There’s a standard for it, which I have completely forgotten.
I’ve lost the book, and it’s going to take me ages to find and put a link in the podcast for. But I will do that. It’s fascinating, because when you start to learn a foreign language, it’s like — how many words do you actually need to know to get by? And it’s about 500, apparently.

Desi: Depending on the language, but yes, it is a finite amount.

Si: Yeah. But when you’re writing, you do need to put yourself into the shoes of someone else. I’m quite good at not thinking like other people — when it comes to it, it’s like, “Well, I know what that word means. Why don’t you know what that word means?” But no, that’s not fair. I’ve got the benefit of a university education and all sorts of things. And the jury pool is drawn from everyone, and this is not fair.

Desi: I was talking to one of our service engineers the other day at work, and I was talking about the deployment of our servers. He was just like, “Oh yeah, so you need the flower API that goes into something like the penguin database or something, which is the orchestra for…” Oh, no, it was the RabbitMQ, which is the orchestra for the canaries of the server that put it all out. And I just messaged back and was like, “To be honest, there was a lot of words in there that meant nothing to me. And all I’m picturing right now is the rabbit from Alice in Wonderland standing in front of an orchestra, conducting a bunch of servers that all have instruments in their hands.” It made zero sense.

Si: And this is a real problem because we — the one I’ve come across is a very simple thing. I have spent a lot of my digital forensics career talking about forensic images, meaning bit-for-bit copies of disks. As soon as you move into talking about pictures, how does one address the word “image”? Because “image” now has two meanings — a picture, and a forensic copy of something. I’ve started trying to go back to “a forensically sound copy of this disk, and it contains these images,” and then refer to the pictures.
But we have so many words that have different meanings.

Desi: Even that in itself is problematic, because if you’re explaining that to someone who doesn’t know it, they’re just like, “What is a forensically sound?” As in, being able to hear it.

Si: Yeah. You’ve used the word “sound.”

Desi: Which has two meanings again. Yeah, when I was in the Air Force, we had this problem because we were starting out incident response in the Air Force, and people just didn’t understand. We called it DFIR — it was everything under the one basket. We talked about forensic images, and I remember my boss saying, “Oh, we need to put an explainer.” We essentially had to deliver a taxonomy or word reference with every report that we gave that was just 10 pages long, so people could look up words and look at the definition for it. We couldn’t come to the point where we were like, “Well, what else do we call it?” That’s what it is, and you need a certain level of technical expertise to read this. Even if you’re trying to explain to a layperson that doesn’t know DFIR, they still need that information regardless of whether or not they understand it. How do you get that information across? It turned into this reference document that we used to send as well that said, “If you don’t understand a word, please go look it up.” And the explanation was written at a grade five level, so it was as simple and clear as we could put it that could still make some sense. But yeah, it was challenging.

Si: Yeah. I used to have a glossary attached to all of my reports.

Desi: That was the word I was trying to think of. Yeah, glossary.

Si: But it’s a nightmare to maintain. And also, if you’re doing it right, your glossary should only contain the terms that are relevant to the particular report that you’re attaching it to. You don’t need to know what a CD is if there’s nothing in there about a CD. And nobody’s seen a CD since the stone ages in that computing environment.
Desi: That’s funny to think now — stuff that everyone used to know is now being phased out, like a floppy disk if you ever came across one again. And CDs were another one. I heard this week that the last manufacturer of CD players has shut down.

Si: It was Blu-ray players, wasn’t it?

Desi: I thought it was CDs. I think it was Blu-rays. Maybe it was misinformation in reporting.

Si: I maybe misread it because I saw that article, but yeah, you can’t get a Blu-ray device anymore. Which is a shame because that’s quite a good technology. You’re talking about, is it 6 gig to a disc? Which is actually kind of usable.

Desi: Sony shuts down production of recordable CD-R, DVD, and BD-R discs. So it’s Sony.

Si: Oh, right.

Desi: Maybe it’s not everyone, but I guess they’re probably a major manufacturer.

Si: I think that’s what it is — Sony has shut down, but they were the last manufacturer left of Blu-rays.

Desi: Oh, okay. Right.

Si: So both of us are right.

Desi: Yeah. We’ll chuck this in the… I found a news article for it, so we’ll chuck all these in the show notes as well for people. That is a shame. I do like Blu-rays. I’m actually going more back to physical media now. Since streaming is so disparate and you don’t own anything — it’s all subscription-based — we still have the odd power outage and you’re like, “Oh, I can’t access…” Or the internet goes into maintenance and you’re just stuck.

Si: Sorry, you don’t have all of your stuff on a UPS?

Desi: No, I mean, the internet goes down.

Si: Oh, the internet going down.

Desi: But I also don’t have everything on a UPS. I’ve got my computer on a UPS, so I could still use that.

Si: So over time, as UPS devices have come to end of proper technical life, they’ve been replaced with new ones, but the old ones have gone somewhere else in the house. So we sat somewhere — the lights went out, the whole street went down.
You could hear the house alarms going off, because where we used to live, the houses had alarms, and when the power went out, they would kick off because they didn’t have backup batteries, or were old or whatever. But we were sitting there — the router was on a UPS, the Wi-Fi was on a UPS, the TV was on a UPS, the PlayStation was on a UPS. We just carried on watching. We just sat out this 20-minute power cut on battery power alone.

Desi: I’m just imagining your household — you’re like, “Oh, I feel like some toast,” so you just carry this UPS over to the kitchen, plug the toaster into it.

Si: I unpacked a box yesterday, and I’m not joking — I was like, “Oh, here’s a UPS. Where shall I put that today?”

Desi: I have a CyberPower one, which I’ve had for a while, but I got that just for my computer. Being that I work from home, it was just important to continue to have access.

Si: Yeah. I do a lot of work on my laptop, which obviously has a built-in UPS — it’s called a battery. So in that regard, that’s not quite so bad. But anything else, you want it to shut down gracefully, or not shut down at all. Otherwise, your data is at risk. I’m actually working on an old NAS here, which has got two lovely red lights for my disk bays — probably because of something similar.

Desi: Great.

Si: It’s only got two disks in it, and both of them are failing. At the moment, the RAID seems to be holding, but I’ll get it off onto something else and then replace them. But yeah, I go with APC, the standard corporate ones. They do nice little home ones. They’re not too bad.

Desi: So I guess we’ll move into the post that I sent you, because I think from our last talk with Rob, I have started reading his book, which I enjoy. It seems much shorter.

Si: I just took it for a walk.

Desi: Oh, you took it for a walk?

Si: Yeah. We’ll come back to that for the conference in a minute.

Desi: Yeah.
I did say to Rob, if I go on a trip… Oh, I guess for the listeners — I don’t know whether we talked about this on the actual podcast last week, but I’m moving.

Si: We didn’t talk about it on the actual podcast.

Desi: Yeah. So it’s definitely confirmed now. Moving to Spain, so that’ll be a lot of fun. Si and I will get to go to conferences together. I know we do tables at some things, so it’ll be good to meet some of our listeners as we get out. I said to Rob, if I do a trip in Australia this year, I’ll take his book and take photos again, like his last one, at some iconic places. But yeah, Rob’s book, for those that haven’t listened to our previous podcast that got released — Rob’s written a new book on AI security and digital forensics, or AI digital forensics and how to approach it. Nothing technical so far. It’s more about how to approach emerging technologies. And I think Si’s gotten up to…

Si: Yeah. Data collection, some forensics, and then —

Desi: Yeah. So that’s out now. You can grab eBook, paperback, hard copy. Pretty much he’s upped his distribution, so you can find it anywhere these days and get it into any country. But I shared a post with Si as we were talking at the start before we started recording, which I think aligns with Rob’s book, and it’s, “If you suck at your DFIR job, AI is going to take it.” The premise — and we’ll link this in the show notes as well — is pretty much if you’re a button forensics person, and you’re pressing the button and then just taking the output without any actual analysis and putting that into words into a report, AI will do that part for you.

Si: And it’s by Brett Shavers as well, so you know somebody who’s very —

Desi: Yeah, and it’s on Brett’s Ramblings, his blog. He’s always got some really good takes. There’s a whole bunch of other stuff that came out this week from Brett, but this one caught my eye the most. I think it’s accurate.
I think there was probably for a while there — jobs in every organization where people call themselves digital forensics people, whereas they were just pressing the button and then turning it into a report for their bosses, and they weren’t diving deeper into the data itself and understanding what they were getting out of the tool. Everyone that we’ve had on the podcast has always said that’s the difference maker between people who stay in DF and people who will transition out and won’t really be technical anymore — that thirst for knowledge, that understanding. Going beyond just the tool — it’s trust but verify, and you need to know what you’re doing to verify.

Si: I think it’s interesting, and I’m going to play devil’s advocate on this one a little bit, but actually from a reasonably well-held perspective. First of all, I think it’s important that we make a distinction — and this happens a bit more in image analysis, and by image, I mean picture analysis, picture and video analysis — where LEVA, the Law Enforcement and Emergency Services Video Association (they made me memorize that for the test so I’d get it right) — they distinguish between a technician and an analyst. A technician — and I mean this with all kindness and due respect and skills — is a trained monkey. You take something, you pull it out, you point at the bits of the picture you recognize, and you put it in a report. It theoretically could be done algorithmically. It’s not about AI taking your job, it’s about the fact that the more we understand about a process from A to Z in terms of producing a technical output, the more of it can be automated. The point in time where we can plug a phone into a kiosk, hit one button — oh, you don’t even need to do that. Plug a phone into a kiosk and it gives you a report at the end saying all of these hashes match known CSAM. That doesn’t require any skill.
It doesn’t require any effort, and for it to have been done by an automated process is perfectly acceptable. The distinction they’ve made is that where we need to go from technician to analyst, or technician to expert witness, is that at that point you need to interpret it.

Desi: Yeah.

Si: How did this get here? And to a certain extent, some of that is automatable. If you’ve got enough data in a given system, you could say, “Okay, well, this file was downloaded from this web browser at this point in time,” because I have all of the associated metadata and logging that tells me that fact. Obviously, the more chatty and verbose a logging system is, or a recording system, the more data you have to make those assertions. Sometimes, especially where you’re talking about analysis of a system which is designed for this — so let’s say something like a police intelligence system. There have been various cases in the UK where people have logged in, looked up their ex-girlfriend’s telephone number to see where they’re living now. Those are heavily logged systems, and they have a username associated with all the activity. Some automated analysis of that is also plausible and reasonable to assert that a particular individual did it, to an extent. Because at the end of the day, you’ve always got the — you can’t tell… And Brett Shavers’ book — I think it’s his book — it’s called…

Desi: Is it Investigative Mindset or Investigative Strategies?

Si: No, there’s one before that, quite a long time ago, which is Placing the Suspect Behind the Keyboard.

Desi: Investigative Techniques to Identify Cybercrime Suspects?

Si: Yeah, because the question at the end of the day is not can you show which username on the computer did something, because you can’t charge a username — you have to charge a person. The thing is, how do you identify an individual? And that’s something that no computer will ever be able to do, because it involves looking outside of the data that has been given.
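The kiosk-style triage Si describes boils down to a set-membership check: hash every file on the device and compare against a reference set of known material. A minimal sketch of that idea in Python (the hard-coded hash set and the function names are hypothetical illustrations; real workflows query curated reference databases rather than an in-memory set):

```python
import hashlib
from pathlib import Path

# Hypothetical known-hash set for illustration only; real triage
# tools query large curated reference databases of known files.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # example value: MD5 of b"hello"
}

def md5_of(path: Path) -> str:
    """Hash a file in fixed-size chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(paths: list[Path]) -> list[Path]:
    """Return only the files whose hashes appear in the known set."""
    return [p for p in paths if md5_of(p) in KNOWN_HASHES]
```

The point of the sketch is Si's distinction: producing this list is mechanical and automatable; explaining how a flagged file got onto the device is the analyst's job.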
Desi: Yeah, linking the person and the persona. I don’t know whether they still use this term, but when I learnt it, it was linking the person and the persona. The persona was the username — the ephemeral digital data — and then linking the person who was doing that. Obviously, to some extent we trust that someone who knows their password — we’re trying to link that via the password or biometric. But then you’ve seen in really serious cases, like the Silk Road one where they found the guy, they made sure he was online and interacting when they were observing him using the laptop, and then they got the laptop before he was able to get rid of the data, right?

Si: Yeah. They tackled him to the ground and somebody grabbed the laptop off him in a public library, I believe. Ross Ulbricht.

Desi: I agree with you though. I don’t think that was a contrarian take on the technician side, because if AI is doing that stuff and putting it into a report, it makes their job easier, and then they go on and do more technician stuff elsewhere. They’re automating part of their job. If you’ve got a job in DF which is literally at your company, push the button, get the data, turn that into a report — I think that’s what will be phased out for sure.

Si: Absolutely. And some of them will be narrowed down because you won’t need two people to do it. You’ll need one person to do it, because they could do it twice as fast by using AI assistance.

Desi: And then in six months no people will need to do it. So both will go.

Si: Well, you’ll always need at least one person, because somebody needs to plug the phone in. Until we have a little autonomous robot that’s going to do that bit for us.

Desi: That’s what the technician’s for. So we have the technician already — their job’s getting easier.

Si: So the technician, yeah.

Desi: We’re getting rid of the DF analysts that were not doing DF analysis.

Si: Yeah. But from a criminal perspective, I’m actually hugely in favor of this.
The jobs market has moved consistently since the first Luddites tried throwing spanners in the works of the steam mills. The jobs market constantly changes, but what we’re facing is such a huge backlog of cases that if you’re talking about two years to analyze a phone currently, because there aren’t enough people to process it — the fact that you can plug it in and have your results within, let’s say a day, is a huge difference, and all of a sudden the legal system is going to have that data to work with. Now, it’s not analyzed, and I’m getting an interesting number of calls this year from police forces who are going to court with data that has been acquired in these sort of ways that hasn’t had any analysis done to it. The defense has questioned it, and then they’re doing retrospective analysis to deal with the issues.

Desi: And is that data coming incomplete as well? Because it’s taken —

Si: Yes. And again, we’ve talked about this before, but there’s a huge issue, and it came up in the conference the other day when I was chatting to someone. There’s a huge issue between respecting the privacy of an individual and getting data so that it can be looked at retrospectively. This happens a lot with phones, especially in the sexual offenses, stalking, harassment space. Somebody comes along and goes, “I’ve received these harassing messages. Take a copy of those messages, but leave everything else. I don’t want you to look at my phone.” For whatever reasons, and they have every right to do this — their privacy is fundamentally enshrined in law and is protected. But at the end of the day, the forensic analyst is not that different to a priest, in that we can’t talk about it, we don’t quite honestly care. Whatever you’ve done, I’ve seen worse, I guarantee it.

Desi: And just as a counterpoint to that, the whole process of all that is horrible for that person.
But if you’re on Facebook, Instagram — which is the same company, all under Meta — or you’re under Microsoft, and any of the data that you’re worried about someone seeing is in any of those platforms anyway, then someone, a human, has probably seen it. Meta got in trouble recently. There was some department in India and they were doing content moderation, which means they were seeing all of the pictures that were coming through. Whereas the terms of use say Meta may use or look at the data to do content moderation, but it wasn’t clear that a human was looking at it. People just assumed it was AI going through some image recognition thing. But then there were these people just seeing really sensitive and intimate photos from everyone that was using all the services. How paranoid are you actually if you’re using these services as well?

Si: Exactly. But the fact is, at the point in time they’re going, “Okay, we’ve got three screenshots and we want to take this to court.” Well, it’s pretty hard to defend. I’ll do my best, but fundamentally you’ve not got a good chain of evidence for it. And also, the UK operates a system called evidence.com that’s run by a company called Axon, which makes the body-worn cameras.

Desi: Oh, yeah.

Si: There are occasional issues with the way that it handles metadata. Sometimes it hasn’t been ideal with the way it handles metadata, and therefore you’ve got pieces of evidence that have been submitted to this and are being exhibited from this that are not as clean as they should be. And it becomes very problematic. Whereas if, even if nobody’d looked at it at the time, if somebody had taken a full forensic image of that phone, we could go back and pull the originals. We could go back and examine parts or aspects of it. Perhaps under a more specific warrant to access specific data, something a little more like a US system whereby you have to have a specific warrant to access specific data, rather than the free rein that we have now.
You are limited to say, “I’m only allowed to look for stuff related to this, and therefore I have to document and justify what I’ve done.” We have to document it anyway, but the justification’s a slightly different thing. I think overall that would be restrictive in computer forensics, but I think in certain cases it would potentially allow for this to work. Then we’re in this position where we have that data for future analysis. And it came up during the conference, which we’ll segue to in a minute, where people were saying, “Gather as much information as you can because although we may not know today what we’re going to learn from it, in future there may be retrospective things that can be applied.” This could go equally well for appeal as well as further prosecution assuming it didn’t complete, or maybe a new test will be discovered in the currently two years it takes for something to get to court now. The example they cited — well, they cited two. The first one was DNA, which obviously we’ve been collecting for decades in terms of blood spatter and all sorts of things. The current advances in DNA processing mean we’re able to determine far more from it and go back in some of the cold cases, get much better matches. We’ve been able to acquit people on the basis of DNA evidence that was gathered. If we’ve not gathered digital data, we can’t look at it. There’s no future recourse to it. Obviously, the inverse issue is that we can’t store it all because it’s so massive in volume, and the amount of data contained in the DNA strand is 10 times what’s on the computer, more than 10 times. It’s quite self-storing because you put it on a shelf and it stays there, and digital data requires electricity to maintain, so there’s cost implications. But we’re doing ourselves out of that possibility of future analysis and further assessment by not gathering this information.

Desi: Mm.
Si: But maybe this is where AI would make people feel better, because, as you said about Meta — they’re… While people think it’s not a human being looking at it, they’re much more happy for it to be done. If you had said, “What we’re going to do is take your phone. You tell us what you want us to gather. We’ll plug it into the AI and tell it that we want it to extract those bits. It’s going to do the work, and then we’ll show you the first draft. Are you happy with this?” And then we’ve still got the data, but the person perhaps feels that their privacy has been respected. I don’t know. It’s a difficult challenge.

Desi: As we were talking about this, I can’t remember where I ever heard it before, but it comes from this inherent distrust — and I have this bias as well — this inherent distrust we have for the government and law enforcement in terms of being able to protect our data and our privacy. It’s just — I guess because it’s more transparent when something goes wrong, because the government or government agencies have to report, and then that gets reported on very heavily and it’s like, “Look at all this that they’re doing wrong.” Whereas a company like Meta — and we’re not hating on Meta, it’s just a really good example because it’s been in our lives for so long — they’ll data mine us and then 20 years later or 10 years later it’ll come out and be like, “Oh, Meta was data mining us when they weren’t meant to, even though it wasn’t in their terms and conditions.” And Meta’s just like, “Oh, sorry.” And then that’s the end of the reporting, because they’re very good at marketing and spin to move it away from that narrative. While we were talking about this, I was looking it up, and it’s called the privacy paradox — our trade-off between wanting privacy and also wanting the instant gratification that comes with these services that give us social connectedness online, but then will data mine us.
The reason why we may lean more towards private companies is we feel like we’re getting something from them, whereas the government, we only ever see it as a net negative — they’re not doing good enough, they tax us, they’re taking our data, and we’re not really seeing a benefit out of it. Even though both are doing potentially the same thing, and I would say governments to a lot lesser extent, at least on the surface, than private companies, we still feel this way. Even if AI was doing it, I think there would still be the distrust in government and law enforcement for those psychological reasons.

Si: Yeah. I was going to say it’s interesting on that level, inasmuch as something that somebody said — I don’t know who I’m quoting — “If you aren’t paying for it in monetary terms, you are the product.”

Desi: Yeah, you are the product.

Si: You are what is of value, and therefore get used to it. But I think it’s also — if we look at just popular media, the idea of an abusive government or an abusive police force is prevalent in so much film and TV that it’s almost what we come to expect when we interact with the police. The opportunity to see behind the scenes and see how dedicated officers are, how hard they work, the hours they put in, the amazing cases that they do put together, the investigative leads that they follow up on and tie together to make huge inroads into organized crime or down to individualized crime — one person against another. The stuff they deal with is phenomenal, and they deserve every bit of praise and support we can possibly give them. And then at the same time, they stop me for speeding, not for a long time I hasten to add, and I’m like, “I’m pissed off. Haven’t you got something better to do?” Well, no, actually, they’re protecting the safety of the roads and they’re absolutely right, and you know all of this. But you still feel like… Is it natural for humans to rebel against authority? We do it to our teachers, to our parents.
Is it just an inherent human nature to be a bit arsey about these things?

Desi: That is true. I feel like we could go down a rabbit hole there, and I won’t do that.

Si: Yes, and that wouldn’t be unusual for us. Let’s talk about the conference. So a little secret — knowing that we didn’t have anything specifically scheduled to talk about, I thought, because AI’s been so popular at the moment, I’ll go and ask Claude. I gave it a prompt about the podcast.

Desi: Maybe our days are numbered, Si. Maybe Zoe is slowly just taking all these recordings, feeding it into Claude or something, and she’s going to take our voices.

Si: Wait for it. You’ll love this. It’s made you aggressively Australian, I hasten to add. The prompt I put in was, “Please review as many of the transcripts of the Forensic Focus Podcast as you can,” gave the link, and asked it to: A, summarize the overall details of the presenters, the contents, and the style; B, come up with some intro and outro blurb, because we struggle with that.

Desi: We always just forget. That’s the thing.

Si: And C, make a proposed list of topics of conversation for an episode without a special guest. So it went off and did it. “Thorough breakdown based on everything available across transcripts, episode descriptions, and platform listings.” It’s got our names, our roles, which is pretty accurate. It’s got you living in Adelaide still.

Desi: Training data’s all —

Si: A little career brief. It does say we’ve replaced Christa. It says, “There is a third occasional host, Paul, who has fronted some of the wellbeing-focused episodes, suggesting this show uses specialist contributors for themed series,” which is true.

Desi: That’s because Paul’s only recent. Well, he’s been around for a while but hasn’t been super long.

Si: Content: “The show covers the full DFIR spectrum.
Broad topic clusters include AI’s impacts on forensic practice, mobile and cloud forensics, mental health and wellbeing.” Style: “The tone is distinctly conversational.” “Si opens with a characteristically loose, inclusive greeting.” And then it goes, “which sets the register perfectly. Informal, slightly irreverent, but substantive.” I was quite pleased that we’re substantive. “The transatlantic pairing, Oxford–Adelaide, creates a natural dynamic.” I was a little offended at this. “With Si tending towards the analytical and slightly drier, and Desi more animated and expansive.” Then it went, “Episodes run long.” “The end-of-year wrap-up ran to 93 minutes, and are clearly unscripted. Tangents are common and welcomed.” Desi: Oh, thanks, Claude. Si: So the intro it suggested was for me: “Ladies and gentlemen, boys and girls, and anyone else tuning in, welcome back to the Forensic Focus Podcast. I’m Si Biles, joining you from a slightly damp Oxford.” Which it is — it’s tipping it down outside. “And with me as always is Desi Desmond, coming to us from a considerably sunnier Adelaide.” And then you lead in with, “G’day, everyone. No guests this week, it’s just the two of us rattling around in the digital forensic space, which means we can go wherever the conversation takes us, and it always takes us somewhere.” Going on to its suggestions for the topics — “what does AI-assisted actually mean” is one of the ones it said. “Backlogs: is anyone actually solving it,” which we’ve sort of talked about briefly. “Expert witness credibility crisis.” It’s just fun anyway. It’s pretty good. Desi: Don’t get rid of us, Jamie. Si: Yeah, my voice is copyrighted. I want this noted. I’m actually part of the class action lawsuit against Anthropic because one of my books was used in their data set. Desi: I remember. I think we talked about this when it happened, when you joined. It was a while ago. Have you heard anything? Si: Yeah, so that’s still ongoing. 
I understand the final closure of the applications for restitution is the 30th of this month. So hopefully at that point they’ll count up how many there are and go, “Right, 1.5 billion divided by 1.5 billion means you’re getting $1 a piece, or something.” Desi: I love how AI companies — it’s been proven that they shouldn’t have done it. You can’t just scrape data and then commercialize it, and they’ve done that. And now they’ll get a fine that is nothing compared to their revenue. Si: And I just want to say, I was using ChatGPT. I still use ChatGPT. But I switched to Claude to give it a go, and I think the quality of output from Anthropic’s Claude is way, way better than from ChatGPT. Desi: Claude is top two, yeah, opened our eyes. Si: So clearly, scraping quality training data to put in creates a better response. That’s all I’m going to say. They’ve played a blinder on that front, and if they’re going to pay out a little bit, it might cover my cost and my subscription to it for a few months, and that’ll do it. But I’ve expanded my use of AI. Again, it’s not being used in an investigative capacity — that’s inappropriate — but I’ve been using it to write code, and Claude is bloody good. There’s a Strongman competition coming up on Saturday down at the gym. I’m not competing or even assisting in it, other than the fact that I had Claude write an entire Strongman scoring system from scratch in about three hours with a couple of prompts and enhancements and everything. It handles competitors, scoring, automatic points and tie-breaks, and all of this for competitions that range from the maximum number of repetitions being the highest score, to the maximum weight being the highest score, to the maximum speed being the highest score, to events where, if you don’t complete it in 60 seconds, the maximum distance you do achieve is the highest score of the remaining ones. So it’s a 20-meter carry. If you do it in 25 seconds, you get a score.
If you do it in 20 seconds, you’re faster. But if you don’t complete it but you get it 10 meters, you’re better than the person who doesn’t complete it and gets it five meters. All of that needs to be consolidated into a single score for a single event. I got all of this intelligence, the backend scoring, all of this done, with a usable web interface that runs in a Docker container that I can throw up on DigitalOcean. I got it done in a couple of hours, and it works, more to the point. I had got some code working with ChatGPT, but the iterative process to do it was a nightmare, and it certainly didn’t understand the concept of putting something in a Docker container. Desi: We’re still definitely in the days where there’s no one-stop shop. You’re using tools for their specialized purpose. Claude is coding. If you want image generation, you’re better off using things like NanoBanana, which comes from Google, or some of the other tools. And then there’s specific writing tools if you’re wanting to write a report, there’s a better tool for that. ChatGPT tried to be everything, and it’s not very good. Si: Yeah, I still use ChatGPT. Claude does no image generation at all, so that’s already a non-starter. ChatGPT still does that. It’s getting better at that. It does English a bit better if you’re asking it to write something. But generally speaking, I’ve more or less stopped using it now, except for image generation. Although it’s quite fun taking the output of one and feeding it into the other and asking it to critique it, and then passing it back with the criticisms and asking it to refine it, in both directions. Why not? It’s a good laugh. But I think — moving slightly in a tangent — we need to do more research on the artifacts, the forensic artifacts that are left behind from something like Claude. 
Because if somebody has used it to develop a tool quickly to do something dodgy — in this case, probably given it’s writing code, Computer Misuse Act breaches, hacking stuff — but then they’ve deleted the tool, how much can we retrieve from what’s left on the device of history and usage to determine what use has been made of it? Has it been asked to do things? I think that’s a piece that’s currently lacking in the marketplace. Desi: In the academic market. Si: In the academic market, yeah. Desi: It’s bleeding-edge incidents. We’ve seen evidence — threat intelligence companies have linked nation-state actors to using AI to generate tools, and this is through how the code is structured and particularly the comments, because they haven’t removed any of the comments. They’re matching — it makes me think of handwriting matching, because they’re like, “Oh, these comments, the way these are stylized is typical of this tool to do this.” Si: Yeah. Desi: And from an endpoint perspective, the challenge that we’re facing at my work, which is in the realm of incident response, is determining which actions are done by the human and which actions are done by the agentic process. We’ve now got monitoring where we can see the agent’s thoughts, because at the moment, the most efficient way for agentic processes to work is they’re installed locally on the endpoint, and then they reach out for thought process to the cloud. They get the answer back and then do it on the endpoint, because that’s the most efficient way to do it. If you can see the thought process, you can see that. Then there’s the process history, but sometimes the user is in the command as well. But you can see the thought and then the commands come through, and it’s running those quickly, so it’s faster than human reaction speed as well, so you’re looking for that. But then you’re seeing it pull down different binaries and libraries and pulling this all in.
So there’s still the question of who’s doing it, but you’re seeing in real time all this evidence come through. But the only reason you have that is because you’re intercepting the thought process of the agent to see what it’s trying to do next. Because otherwise, if you didn’t have the thought process, you’d see a couple of things be pulled down, some commands run on the endpoint, and you’re like — what of this was the agent and what of this was the human? And how technically smart was the person versus how smart is the agent doing all this? Si: Yeah, and it comes back to what we were saying about Brett’s book — Placing the Suspect Behind the Keyboard. All of a sudden, one of the potential suspects has become bloody Skynet. I’m going to say it again, I haven’t enabled it because I have huge reservations about it, but Claude’s Cowork option, which will get into a folder and sort the files and everything — Desi: Oh, yeah. Si: Sounds fantastic. I wish I could turn it on. No way on a personal computer or anything with work stuff would I let an agent loose. Desi: Exactly. But I will point out, there’s people who are giving these agents their credit card numbers. Si: Did we talk about the little group chat for the Claude bot? Desi: Oh, the OpenClaude stuff? Their own little forum on Discord? Moltbook? Si: Moltbook, yeah. That is just terrifying. Desi: Yeah, the front page of Agent Internet. It’s like a Reddit for agents. But people who are giving their credit card details and their logins to things — I’m just like… But again, it’s marketing. Talking about this, I was doing some research today. I was looking into Fellow AI, which is an agentic browser. It’s their own process. It’s all integrated. It’s actually really good at doing what you tell it to, and it runs on the endpoint, so it’s quite flexible. It’s a private company, not open source, and you can pay for more features.
It does a really good job for free straight out of the box, which is scary, because I’m like, “I didn’t think we were this far along with agentic processes for the consumer this early.” There’s another one called BrowserOS, which is an open source project. I installed it onto my Windows 11 VM today to do some testing, and Microsoft Defender had a field day. It was deleting so many things out of the folders where it was installed. I looked at all the ratings, and it was all severe, and it was like, “This process will… this DLL will allow remote attackers to execute on your endpoint.” I haven’t dug into the code yet to see what it is, but I assume it’s just common tools that attackers would use to remote onto an endpoint and attack, because it’s doing it all. But it was interesting. BrowserOS and Fellow AI are essentially the same product. One’s just open source, so it’s more exposed to the endpoint than Fellow, because Fellow is a self-contained binary. Windows is deleting everything out of BrowserOS, and I was like, “Oh man…” I’ve tested both now. BrowserOS is not as functional as Fellow. Fellow’s got way more functionality. Imagine if you could install the open source version of Fellow — it would probably do the same thing. Wait until someone cracks open one of these and compromises millions of computers that have these installed, and just goes on a field day and makes a botnet. Or not Fellow, but any company that’s not investing as much into security as a company like Anthropic. Claude is doing really well on the security scales of protecting their product and add-on endpoints and stuff. Si: Well, we can say they are now, but the only reason they’re doing that is because it was shown up to be horrific in the first place, wasn’t it? Desi: I think they had the same issues as every other AI company. They were one of the forerunners with OpenAI. They were all suffering from prompt injection, all this kind of stuff. 
I always thought of it like Windows PowerShell — it’s not a security tool. Don’t treat it like it is. You either have to monitor it or turn it off. Don’t think that it’s going to be secure. But Claude, at least from what I’ve read, seems to be the only one that stays on top of all the stuff. When it comes to AI agents falling victim to exploits, Claude seems to be the only one that picked it up and stopped it. The investment they’ve made — they’ve realized they’re used for coding enterprise databases, and people want to protect those, so they put in the investment to stop people jailbreaking their product. But commercial companies — and I’m not pointing the finger at Fellow, there are plenty of other tools out there — they’re there for the consumer to get use out of the product. If they become a target of a nation state or someone else, I’m assuming it’s going to be pretty easy to find exploits in them compared to other tools. Even look at Microsoft — they’ve got Patch Tuesday because they’ve got vulnerabilities all the time. Si: The larger and more complex the program, the higher the probability of there being serious bugs, and then exploits within that. There’s no real way to avoid it. Desi: Anyway, that’s what I’ve been up to since the beginning of the year. It’s just been a lot of AI security stuff that we’ve been working on, which is fun. It’s interesting. Si: It’s a growing market, if nothing else, isn’t it? Desi: I mean, AI’s coming for my podcasting job, so I need to make sure I stay relevant. Si: Well, that’s it. Using that as a segue to talk about the conference. The opening talk was actually our friend Martino Jerian from Amped Software who opened up the conference with a talk about AI-generated imagery, which was, again, always great to listen to. He uses a fantastic example of a faked image of him with hair, which is quite disturbing. The conference itself was the Leica Geosystems Conference.
Leica Geosystems make LIDAR equipment — amazing LIDAR equipment — that they use for all sorts of scene reconstructions from fire and arson investigations, bomb explosion investigation, road traffic accidents, murders, shootings — all sorts of things where physical scene capture is a relevant concept. They run this conference for free, astonishingly for free. Although the users who come, the people who give talks and attend are often users of Leica equipment, very often they’re not. They’re quite happy to let people who use competitors’ equipment come and talk about their work. It’s de rigueur not to mention brand names in particular, but it’s like that. People came and talked about a range of things. We had stuff from Amped. There was a very good talk from a guy called Henry Vega, an American, who I’m going to try and get on the podcast. He’s been doing work on audio analysis. I’m not going to preempt it too much, because I want him to come on and talk about it — I loved his talk. It was fascinating. They’re taking audio from car accidents, where you can’t necessarily see the vehicles or what’s happened, but being able to derive the speeds from it, because of the sounds he’s able to capture and isolate. Chain of events and stuff like that. But also using the sound of bullet casings falling and hitting a surface to be able to determine what caliber of weapon has been fired. Desi: Practically, where would you be pulling this data from? Where would you be getting the sound capture? Si: The examples he used — one of them was a Ring doorbell that was recording, but it didn’t actually see the event itself. It had been triggered by something else, and it captured the audio. What actually happened was a truck turned a corner. It triggered the recording, and then somebody hit the truck behind. You couldn’t see it because the end of the truck was out of shot. But it captured all of the audio up until that point. 
And gunshot stuff is everything from people videoing a scene through CCTV footage that captures audio. Some places in the US used to have — I don’t know if it’s being deprecated — but they actually have recording devices on the street to identify and respond to gunshots. Which is deeply concerning in its own right. But I want him to come on and talk about it because his research was fascinating. He’s great company. We had people talking about fire investigation. I will write all of this up more formally and stick it in a report for the Forensic Focus web pages. We had somebody come along from the Swedish police talking about the reconstructions they did of a mass shooting at a school in early 2025. Ten people were killed, six were injured at an adult learning center. They reconstructed that without CCTV footage, but managed to string together some audio recordings and pick up gunshots from audio recordings to reconstruct a chain of events and location of people in the building. Fascinating work. And also the quality of products they turned out for, in the end, the media — not because it didn’t go to court, because the perpetrator shot himself at the end of the event. But being such a public thing, they had to put something out to the press. The quality of the material they produced was astonishing. They were using Unreal Engine to animate their walkthroughs, and it just looked incredible. Marcus Rowe and Dan Prewitt from Leica gave a talk on a TV series thing. I’d like to get them on to talk about it.