Forensic Focus (archived May 14, 2026)
Join Emi Polito, Forensic Analyst at Amped Software, for a look at Assisted Redaction in Amped Replay — a new tool that automatically detects people, vehicles, and license plates to streamline complex redaction tasks.
The following transcript was generated by AI and may contain inaccuracies.
Emi Polito: Hello everyone, and welcome to our webinar on speeding up your video redaction with Amped Replay. Just a quick introduction about me — I work for Amped and do mainly support and training. I’ve been doing it since 2022, and I’m formerly an imagery analyst.
I worked for a number of police forces in the UK, namely Bedfordshire Police and Essex Police. In the United Kingdom, each county has its own police force. I also work privately — I still do quite a bit of forensic imagery analysis and processing, like enhancement and authentication, for police forces and defence solicitors on commission.
Occasionally I do redactions, which is the topic of today’s webinar. We’re going to see how we can simplify what is quite a tedious bit of work, especially when cameras move a lot. There are a lot of people whose identity we need to hide or protect on video. It’s a legal requirement in a lot of situations, but it can be quite tedious and fiddly.
We’re going to look at how we can simplify it and make the process easier whilst complying with regulations. I’ve got a Bachelor of Technology in Multimedia Systems Engineering, and I’m also a LEVA certified forensic video analyst. Random fun fact — I used to work in television back in the day, doing fashion shows and music documentaries.
You may ask yourself, “How did you end up in forensic video?” Well, I actually like it quite a lot, and I’ve been doing it for about fifteen years now.
Right, so we’re going to look at some of the most common issues we encounter in video investigations, and why we have to redact video and audio to comply with regulations. One of the biggest problems we face when doing redactions is that the people doing them sometimes have no skill set or experience in dealing with video forensically.
There are a lot of situations — for example, in the United Kingdom where I’m based — where there might be administration staff working for the criminal justice system, or personnel working for Freedom of Information Act units. They have suddenly got video on their desk, and they are required to redact faces, hide the identity of people, and remove bits of sound that reveal witnesses’ details, telephone numbers, addresses, and so on.
They may not know what they’re doing — that’s not their primary job. So that’s what we at Amped can help with: developing products that can be operated by personnel who are not necessarily experts in video, but who are required to produce video evidence and do so forensically.
One of the other issues that gives a lot of people headaches is that the video format a lot of the time is not supported. This happens the most with CCTV footage — digital CCTV — because a lot of manufacturers choose their own way to deal with that video, and sometimes you just can’t play it. You need proprietary software produced by the manufacturer, and it gives a lot of people a lot of headaches.
Luckily, the material we have to redact most of the time — for example, body-worn video — is usually in an open format. But you may come across CCTV that’s difficult to play, and having to redact information from those as well, hide faces or things like that, we can deal with that. One of the things we do at Amped is understand a lot of proprietary formats and play them, making them playable just as easily as dragging and dropping files into the program.
The other thing that can be problematic is watching very long CCTV clips in order to find evidence of crime. This is a plague for a lot of investigators — having to sit down in a dark room and watch hours and hours of CCTV, hoping to find some evidence. A lot of the time, video needs to be watched, especially for the most serious crimes, but nothing of relevance is found.
That still needs to happen, though, so there’s still a lot of hours invested in that process even if nothing good comes out of it. It’s a process that needs to happen in order to exclude things from an investigation. Another issue is being unable to redact sensitive information or quickly annotate the video.
This is work that in the past used to be done by video editors or people who work with images and videos full-time. But now there’s a need for police officers and investigators — any sort of police staff — to publish an image on a website for an appeal, or something like that. It needs to be done, and it needs to be done quickly.
Another issue is incorrect techniques or insufficient software to correctly capture individual frames or clips. We refer to this as acquisition — how you load the video and how you export it after you’ve made annotations, redactions, or enhancements. Choosing the right format when you export something is also very important, because you want to retain quality.
It’s not just visual quality either — it’s also reliability of timing information, frame rate information, timestamps, et cetera, that we need to preserve. Then there’s another issue of no experience producing forensic reports or auditing logs. All these things might make you think, “I haven’t actually thought about that,” but they happen in real time.
These are all things we have thought about in advance and that we can help with, even in a product like Replay. And of course, there are many videos but little time available. We need to make the process not just easy but quick — that’s very important in policing, time efficiency.
Let’s deal with something specific: artificial intelligence in policing. As you probably know, there’s a lot of development with AI, which is basically a computer that has been trained to find data, create data, assess data, or analyse data — so it’s gone through a process of training.
There are a lot of AI tools used at the moment in policing — for example, to find encrypted chats in text messages. Drug traffickers use a code to communicate with each other; if you have trained a computer to understand this jargon, it’s likely going to find you a lot of information that would otherwise take a long time to identify.
But what you may not know is that there are now regulations in place for the use of AI in policing. You can see here a snapshot — this is a document released by the College of Policing, which is an organisation in the UK that provides methodologies and recommendations on how to use certain tools, in-house or outside.
You can see some of the principles laid out for the use of AI tools and AI software in policing. It’s got to be lawful, transparent, explainable, responsible, and accountable. So before you use a tool that uses AI, it may have to comply with these requirements, and there will be ways to validate how those tools comply.
Why do I talk about AI in this specific presentation? Because we’re going to see in a minute that in order to help you with redacting footage — i.e., hide faces, vehicles, license plates, or things of that nature — we’ve implemented machine learning methods, an AI tool that recognises people’s faces in video, recognises vehicles, recognises license plates, tracks them around your shaky video and automatically redacts them.
We’re going to see how we’ve made it in such a way that it complies with several regulations in policing. There are strict regulations — they may not be identical internationally, but most countries now have rules that must be adhered to. For example, in Europe, there’s now the AI Act.
In fact, Michelle and I just came out of a company meeting where we’ve got loads of webinars in regards to the AI Act that we’re hoping to release further down the line this year. There are regulations we like to comply with.
Now, one of the main things about using AI tools — first of all, there are AI tools that can process images, enhance images, or clarify images. They are forensically very dangerous to use. The reason is that those tools don’t just use the original data in your images — they actually add data that comes from other sources, learned during training. Forensically, you can understand how that’s very dangerous.
But there is no doubt that for things like finding people’s faces or finding license plates for the sake of redacting them, such a tool that has been trained to find such things can be very beneficial and safe — as long as it’s got some safeguards in place, and we’re going to talk about that in a minute.
What that means is that AI tools will make errors, just like humans, and we cannot rely on AI tools blindly. We’ve got to check them and make sure that they did what they were intended to do, and if they didn’t, we can correct certain things quickly and efficiently.
Another thing that has been reported as being difficult with the use of AI is reporting its actual usage. We’ve redacted some footage — but when did you use AI and when did you use more traditional methods? We’ve got to explain this and have it on a piece of paper or in the document we then disclose with our evidence.
Another impediment is that a lot of AI models, because they are so large, are usually hosted online. It’s not unusual that you may access, for example, a tool that can generate imagery from a text prompt — you type what you want the model to create in an image, and then that goes out onto the internet to the appropriate model in order to generate that data.
In policing, that can be a problem, because having your forensic machine connected to the internet can be dangerous. Some countries — for example, the Netherlands — have got strict guidelines where forensic machines cannot be connected to the internet.
You will see how with Replay, the main AI function — that is, the redaction features in the video — is all offline. You install the program, and once you install it, everything you need is in that offline environment. You don’t need to go onto the internet.
Our solution for all these problems is called Amped Replay. It’s one of the five programs we supply at Amped Software, and it’s designed for police officers and investigators who are not necessarily experts in video and need to do annotations or redactions, or it may just be a case where they need to view footage to find evidence of a crime.
All those issues we talked about before — the difficulties sometimes in loading video, the tedious work of redacting or annotating — we try to simplify in this solution, Amped Replay. We refer to it as the forensic video player for non-technical personnel, so non-video experts.
It supports over 300 video formats, using the same conversion engine as other Amped products such as Amped DVRConv and even Amped FIVE, which is the most well-known Amped product. So you can easily convert and play videos — you don’t have to worry about the format, you just drag and drop it.
It’s got a very traditional motion detection tool which can help you find suspect movement in long surveillance videos. This can really help when you’ve got hours and hours of video and you’re looking for a person crossing a road in a specific area, or a vehicle passing by. You can apply some techniques for the program to find that movement using traditional forensics. This doesn’t use any AI — it just recognises how pixels change from one frame to another.
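The pixel-change idea can be sketched in a few lines. This is a hedged illustration, not Replay’s actual implementation: the frames are tiny invented grids of grayscale values and the threshold is arbitrary.

```python
# Minimal sketch of non-AI motion detection by frame differencing.
# A "frame" here is a 2D list of grayscale values (0-255); real tools
# work on decoded video frames, but the principle is the same.

def motion_score(prev_frame, curr_frame, threshold=25):
    """Fraction of pixels whose value changed by more than `threshold`."""
    changed = total = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for prev_px, curr_px in zip(prev_row, curr_row):
            total += 1
            if abs(curr_px - prev_px) > threshold:
                changed += 1
    return changed / total

static = [[10, 10], [10, 10]]
moved = [[10, 200], [10, 200]]  # half the pixels changed a lot

print(motion_score(static, static))  # 0.0 -> no motion
print(motion_score(static, moved))   # 0.5 -> motion detected
```

A real detector would also restrict the comparison to a selected area of interest, exactly as the tool lets you do.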
The thing about Replay is that it’s got a simple interface. You’ll open it and immediately know how to use it. It’s designed so you don’t have to look at manuals or quick start guides — you can just open it up and use it, no problems at all. It’s great for anyone who touches video.
Even non-technical staff, like administration staff who deal with data protection requests or Freedom of Information Act requests and need to redact video or sounds, can use Replay. You don’t need to be an expert at all. It’s basically a player, but it’s also got advanced annotations and redaction features, and integrates fully with FIVE.
For those of you who know, Amped FIVE is the program that Amped supplies for more advanced enhancement. Amped FIVE is more suited for video experts — people who understand problems in images, limitations, resolution, compression, distortion, all the kind of issues you get with imagery. So Amped FIVE is more designed for that.
What that means effectively is that you can start a case or an investigation in Amped Replay, and then if your organisation has a more advanced unit that deals with video, that project or case can easily be imported into Amped FIVE and continued or corrected from there.
In regards to redaction, we’re going to look at how there are several methods to track and redact in Replay. Some of them use traditional pixel recognition, so non-AI based. Some use what’s called keyframing — registering the position of people or vehicles moving on the screen, then generating linear movement between those points. That’s very common with video editing.
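As a hedged sketch of that keyframing idea: positions are registered at two keyframes, and the in-between frames are filled by linear interpolation (the frame numbers and coordinates below are invented for illustration).

```python
# Keyframing sketch: the redaction box position is registered at two
# keyframes, and intermediate frames are filled in by linear interpolation.

def interpolate_box(kf_a, kf_b, frame):
    """kf_a and kf_b are (frame_number, x, y); returns (x, y) at `frame`."""
    fa, xa, ya = kf_a
    fb, xb, yb = kf_b
    t = (frame - fa) / (fb - fa)  # 0.0 at kf_a, 1.0 at kf_b
    return (xa + t * (xb - xa), ya + t * (yb - ya))

# A box keyed at frame 0 (x=100, y=50) and frame 10 (x=200, y=90)
# sits halfway between the two positions at frame 5:
print(interpolate_box((0, 100, 50), (10, 200, 90), 5))  # (150.0, 70.0)
```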
We also now have what we call Assisted Redaction. We call it Assisted Redaction for a reason: yes, it uses a machine learning model, but it does all the tedious work for you so that you can review it. As a human, you can review it, and if the model has made any errors or missed anything, you’re there to correct what the AI engine hasn’t picked up on.
We made it this way because it then complies with some of the regulations around AI tools in policing. AI makes errors, so how do we reduce the risk of, say, a child whose identity must be protected appearing unredacted in court because somebody has missed it? We’ve made a system that implements AI to do all the tedious work, and then makes it easy for you to review and correct.
What that means is that you’re still in charge — you’re still responsible for the work, and that’s just the way it has to be in forensics. It’s designed to be easy to review and correct, so a human reviews the work the AI model does to find persons, vehicles, or license plates, and then it’s easy to correct.
A person might have been identified by the AI model but missed by a couple of frames — then it’s just a matter of extending that redaction back and forth a couple of frames, and it takes seconds to do. All uses of AI are reported transparently to comply with regulations, so in reports we generate when we produce our evidence, there will be clear indications of when and where the machine learning model has been employed.
It’s quick and easy to use, and everything is installed offline, so you don’t need the internet. As long as you’ve got the installer and a valid licence, you just install Replay and you can start. No internet access required.
Some of the benefits: it performs common tasks to analyse video evidence during the initial stages of an investigation. That’s really what it’s designed for — the early stages of an investigation when you are in a rush. You may need to export images from a video, but you need to make sure you’re redacting third parties not relevant to that investigation. That’s where Replay becomes really effective.
It can quickly get an image out of the media for identification or public interest. Sometimes I see video screens being filmed by a mobile phone or a body-worn camera — that’s the worst way you can acquire video. That happens because a police officer might be on the scene and might not know how to download the CCTV. That’s how it’s captured, but of course, that might have big repercussions.
With Replay, you just drag and drop, and that ensures your final product — the image you export from the video — is pristine quality, as close as possible to the original. We can redact and annotate easily, even in shaky videos, and that’s where Assisted Redaction can be really helpful.
Body-worn cameras, worn by an officer, will be very shaky. Trying to track a person when the video is that shaky can be quite tedious, and that’s why it can be really helpful. You no longer have to rely on office hours and availability of dedicated units for basic tasks. That’s a common problem — “Oh, the video unit only works Monday to Friday office hours, but I’ve got this investigation on a Saturday.”
Some of the evidence, especially here in England, suggests that the use of Replay is also beneficial for video experts, because a lot of cases don’t need the experts and can be offloaded, allowing them to concentrate on the work that can only be done by the experts. We’ve already had feedback that the use of Replay in forces is extremely beneficial for that reason, even for the more experienced video units.
Through the integration between Replay and FIVE, if you’ve got a system in place where police officers redact in Replay and there’s a peer review that needs to go from the video unit, the video unit can open the Replay project in FIVE, make corrections and adjustments, and be the final link in the chain of that evidence production.
It’s easy to correct or amend redactions previously done by frontline officers — and there’s reassurance that videos are processed correctly with a controlled and restricted toolset. So with tools that are designed to deal with video forensically — that’s what we’re here to do, that’s our business.
It’s powered by the conversion engine that understands formats and can play them or convert them without losing any quality, or with as little loss as possible. Sometimes there can be a loss when you convert something, but it makes it as close as possible to the original. It integrates with Amped FIVE for further processing.
Just in a nutshell, for those of you already Replay users, I’m going to keep this very quick because I want to jump on Replay and show you what you can do. We now have a new Redact tab — this is new-ish for about a year. Before, you did the redaction and annotation in one single button.
This is your Replay interface. As you can see, you’ve got the video in the middle. You’ve got buttons here that allow you to do playing, enhancing, redacting, annotating and exporting — as simple as that. You can see why it’s easy. Here you’ve got tools and drawing tools that we’ll have a look at in a minute.
Before, we used to do redaction and annotation in the same area, but now they are separated. They’re separated really because we need to ensure that we redact before we do any annotation. That’s why the Redact button comes before Annotate. We’ve got, of course, the new Assisted Redaction model, which I want to emphasise uses a machine learning model. We’re very transparent on that.
There are some annotation improvements. You can now feather selections and also invert them. You can see in this particular example — a Freedom of Information Act request. If a person who has been caught on camera committing an offence requests all the evidence the police have on them, they have the legal right to request all that evidence — maybe because they want to speak to their solicitor.
So with Freedom of Information Act, you basically need to hide the identity of everyone identifiable apart from the subject of the request. It’s just a case of redacting visually everything but that subject — that’s why we’ve got an invert selection. As you can see here, everything is pixelated, redacted, so you can’t identify any license plates or any faces apart from the subject of the FOI request.
The reports you can generate in Replay — which can be as important as the actual evidence, especially if it goes to court — have had a complete revamp. They’re now much more streamlined. Things are easier to find, there are chapters, and it’s much easier to read.
You can do things like bookmarks customisation — that’s more for doing a storyboard, a chronology of events where you illustrate the frames of a video that might be relevant to a case. There’s improved motion detection — if you’re looking for motion in a specific video, maybe you’ve got evidence that your suspect or witness has crossed a specific area, you can select an area of interest, and there are improvements there as well.
The audio redaction has also been improved, so we can do video redaction and audio redaction, or do both at the same time. With audio redaction, we’re usually getting rid of little sections of sound that maybe reveal the address of a witness or victim. You certainly don’t want the personal information of a victim or witness in the courtroom for the defendant to hear.
You can also do things like watermarking. This can be very efficient if you want to make it clear that you’ve used a specific tool like Amped Replay to produce video evidence. You can have it across the whole thing, or just at the top left — you can customise it. The graphical interface is also customisable in itself — you can make panels bigger or smaller. This is all relatively new.
We’ve improved the way we convert and export media using a format that’s much more forensically robust and modern than its AVI predecessor. You can also animate annotations and redactions — we’re going to have a look at that in a minute.
I’m going to wrap up this section now. It’s taken me half an hour, so I’ve got a little bit of time to actually show you Replay in action. When you install Replay and open it, this is what you get — a nice streamlined interface with the buttons we saw earlier.
The easiest way to load video into Replay is just dragging and dropping. For example, I can drag this clip into Replay — open Windows Explorer or the desktop, make sure you’ve got both visible at the same time, so you can drag from one location to another. By dragging, I mean clicking and holding the left mouse button and dropping it into Replay.
As soon as you drag and drop, you’ll see that the button highlighted is the Play button. This is the button that allows you to do viewing and bookmarking. Effectively, you can do that everywhere you are in Replay. But the Play tab, for example, has got motion detection which allows you to detect motion. If you click between the buttons, you’ll see this part of the interface remains the same — what most changes is the panels on the right-hand side.
What that means is you can still view the footage even if you’re doing enhancement, redactions, or anything like that. At the bottom you’ve got the waveform — a graphical representation of the audio gain in your timeline. Here you’ve got the start of the video, and here the end. You can see these videos go silent, and more or less in the middle there’s some sound which is louder than other areas. This is useful because it allows you to do audio redactions quite easily.
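The waveform described here is essentially a per-window loudness measurement. A minimal sketch (the sample values and window size are invented; real audio has thousands of samples per second):

```python
import math

# Sketch of a waveform display: one RMS (root-mean-square) level per
# window of audio samples -- loud sections produce tall bars.

def waveform_levels(samples, window=4):
    """Return the RMS level of each consecutive window of samples."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + window]) / window)
        for i in range(0, len(samples), window)
    ]

# Silence, then a loud burst, then silence again:
audio = [0, 0, 0, 0, 100, -100, 100, -100, 0, 0, 0, 0]
print(waveform_levels(audio))  # [0.0, 100.0, 0.0]
```

The tall bar in the middle is what lets you jump straight to the spoken section that may need redacting.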
We’ve got the Enhance tab here. If I click on that, this has got very easy and safe-to-use enhancement tools. One of the most common things you may want to do is lighten up a video, or change the contrast between foreground and background. For that, you can just use the Light filter. You click on it and get a drop-down box where you can choose, for most filters, whether to use automatic or manual modes.
The automatic mode is great for a person who has no idea what they’re doing with the video — you can just click on that and let the software automatically assess things like pixel values and luminance, and change them across the whole video. Or you can choose manual mode — then you’ve got more parameters and sliders. Very easy: contrast and brightness. If you want to make your video brighter, you just drag the slider.
What that does effectively is change the pixel values. Remember, digital images and videos are basically a combination of pixels — the smallest elements of the picture — and each one of these pixels has values of luminance and colour. If you want to make your image brighter, you increase the luminance values so they’re all a bit brighter. That’s effectively what brightness and contrast do — they change pixel values.
Why is it important to do this? Because in a minute we’re going to be redacting this video. We’re going to detect all the faces and then blur or pixelate them out. We want to give our Assisted Redaction model the best opportunity to do that. If we’re telling Assisted Redaction to look at a video that’s very dark — and now I’ve deliberately changed the brightness to be very low — the model is going to have issues finding faces.
What we’re doing is preparing the video for what comes later. We want to make sure we’ve got a decent brightness. We don’t want to go too much, because when we do too much, we’re saturating — making everything reach maximum white and washing out detail that was previously there. We want to find a comfortable compromise where we can clearly see faces without losing facial details.
This video is very short, so Replay just gives you a snapshot of the essential information — for example, how many frames there are in the video and what frame we’re at. You see that red line — that’s my playhead. I can click and drag it across my timeline to quickly scrub the video. As I scrub, you can see the frame number changing — now looking at frame 109 out of a total of 134.
At the bottom you’ve still got controls like Play, so you can play the video at its intended frame rate. You can analyse the video frame by frame using the previous and next frame buttons. There are shortcuts for these too. Then you can take bookmarks — useful if you’re looking at a long video and finally see your suspect, and you want to make a note of that frame because later you’ll illustrate it in your report.
You can zoom in or out, and another thing you can do is set a portion of the video you might be interested in. Say you’ve got an hour-long video but you’re interested in a very short portion — 10 seconds or so. You can top and tail it or set what we call a range. You put your playhead where you want your new video clip to start, click on the Start Range button, and you can see what’s highlighted is the portion of the video you’re working on.
Similarly, if you want to select an endpoint, you put your playhead at that point and click End Range. Now you’re only working on that portion — you can still look at the old video, but later when we export, we’ll only export that portion. If you also just want to see that portion on the timeline, click on the Stretch button, and now you’re only viewing that portion. So whichever way you want to work.
All right, let’s go and redact. This is really the topic of this webinar. In the redaction tab, we’ve got a Hide button, a Text button, and an Assisted Redaction button. You need to be in Redact in order to see these buttons. You also have an Options button, which I’ll show you a little later. There’s also a section for audio redaction — I’ll show you later how easy it is to remove sound, but I want to focus on video for now.
The easiest way to redact in Replay, if you want to do it manually because you may not want to use AI at all, is to click on the Hide tool and then draw on your video. In this instance, I’ve just drawn a circular mask by default. Then you can see the strength, which gives you the strength of the redaction. You want to be careful with this — not too aggressive, like this for example, but at the same time enough not to identify that person.
With pixelation, you’re usually safe. But if you change the redaction effect — there’s a drop-down where you can change to Blur or Blacken. If you select Blur, well, blur is a process that can be arithmetically reversed. If you use something like Amped FIVE or even Photoshop and you know what you’re doing, you can potentially de-blur this face and reveal more detail than is actually there.
Of course you don’t want that. The whole purpose of this exercise is to make sure that everything you’re redacting cannot be identified. For that reason, you can also choose a Blacken effect, which will just blacken everything. There’s no way you can reverse black pixels. That’s the safest option, but it’s the less visually pleasing one. When you do forensics, it’s not about how it looks stylistically — it’s more about the safety and security of what you’re doing. You may safely go with Blacken, and that way you also make it clear that you’re redacting that person.
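Why is Blacken safe? Because it destroys the pixel values rather than transforming them. A small sketch (invented pixel grids) makes the irreversibility obvious:

```python
# Blacken sketch: every pixel inside box=(x0, y0, x1, y1) is set to 0.
# Unlike blur, which transforms values, this destroys them outright.

def blacken(pixels, box):
    x0, y0, x1, y1 = box
    out = [row[:] for row in pixels]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = 0
    return out

face_a = [[90, 80], [70, 60]]
face_b = [[10, 20], [30, 40]]
# Two different faces produce identical output -- no algorithm can
# recover which one was redacted:
print(blacken(face_a, (0, 0, 2, 2)) == blacken(face_b, (0, 0, 2, 2)))  # True
```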
The problem, though, is that there are many faces, and this is just a static video — the camera isn’t even moving. To make the process easier, we’re going to use Assisted Redaction. Click on that button. Very easy to use. On the filter settings, you have what you want to identify and redact.
This feature came out about six months ago, and we’ll improve it and add new features as we go along. We update the software very regularly — three or four times a year. At the moment, the model can identify persons (full bodies), heads (faces, back of heads, profiles, any angles you can film a head — the model has been trained to recognise that kind of detail), vehicles, and license plates.
We’re also working on adding things like tablets and smartphones. Why? Because one of the common things that happens when a police officer interviews a witness is that he or she will make notes of personal details on a smartphone or electronic device or tablet. You may redact the sounds, but visually that information will still be viewable. So we want to make sure we can redact that — it’s one of the things we’re adding.
In this instance, what we want to do is redact everyone but a specific person — this lady here. She’s the suspect in whatever this crime is. We want to redact everyone but her. First, we’ll tell the model to look for heads. All I have to do is untick all the other options and just leave Heads. Then click on Run Assisted Redaction.
This process uses your graphics card to identify and track the persons. It depends on the power of your graphics card — the better the card, the quicker it will find that information. You don’t necessarily need a dedicated graphics card; you can also work with standard integrated or chipset graphics and your CPU, but it will take much longer. So the faster the graphics card you have, the quicker it will be.
Sometimes it will pause for a bit and then continue — don’t panic when that happens. The model might do a quick assessment of the overall video first, and then start looking at the frames, identifying people or whatever it is you’re looking for. When the processing is completed, you haven’t actually done any redaction yet. All you’ve done is identify — in this case — the heads.
The identification is in the form of what we call bounding boxes, and they also have identifiers. This will not be visible on the video where you exported it, but it helps you and the model identify people, persons, or heads — whatever you’re looking at — by a numerical convention. You’ll see later how you can rename labels: suspect, victim, witness, or whatever. By default, this is what happens.
On the left-hand side, you’ve got the Assisted Video Redaction panel. It will give you the thumbnail of all the features that have been located. In this instance, we’re only looking for heads, so they’re all listed here. It also tells you the label and the frames at which each feature was identified. By default, this panel gives you all of the heads detected in the video.
But say you wanted to filter the results just to show the heads at a particular frame — you can click on this button here, “Only list redactions present in the current frame.” Then as you move your playhead, the contents of this panel will change — you’ll only be shown the heads identified in this frame. In this video, there are more or less the same people in every frame, but when you work on other videos where the camera moves from one area to the other, this is a very efficient way of finding what you’re looking for.
Before I do anything — before I redact — I want to tell the application that I do not want to redact a certain person, for example this person here. There are two ways of doing this. Either I click and scroll through the list of findings — the one highlighted will also be highlighted in green on the viewer, so I can quickly scroll through them to find it — or, since I know that the software is calling her Head 244, I can identify her by that convention.
I want to remove that person from my redaction process because she's a suspect and I don't want to redact her. All I have to do is tick that box and then click Remove. You also have to take into account what happens when the camera moves to a completely different area of the scene, so that person goes out of view, and then eventually comes back to this table and is in view again — how does the model recognise that it's the same person?
The model has been trained to recognise heads, vehicles, and license plates so that, even if the camera moves or a feature goes out of view, it can often understand it's still the same feature. But sometimes it doesn't — coming back to the fact that AI makes mistakes. Maybe the lighting has changed, and it no longer recognises that person as the same person. In that case, we have a way to manually merge different features together.
We can manually tell the model, “Hey, that person is the same person, even if you’ve identified it as two different heads,” and we can merge them manually. I’ll show you this better in the next sample. In this instance, we’ve only removed that one from the selection. When I’m ready, I just click Apply All.
What that does is apply a Hide filter — which I showed you earlier — to all the bounding boxes found in the video, and it will track them for as long as the boxes are present. Notice how all the faces have been blackened — this is because the last time I used the Hide filter, I used this redaction type. Say I'm not happy with black — okay, maybe if it was just one person it would be fine, but now there's just a bunch of black blobs in my video and it looks kind of messy.
What I can do is click on any of these redactions — it doesn't matter which one. As soon as you click, this panel shows the settings for that particular redaction. Say I want to change it to pixelation — I go into the dropdown box and click on Pixelation. Now I can set the strength to whatever I want. I want to find a happy medium — strong enough that people can't be recognised. That's pretty good.
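Pixelation of this kind is typically implemented by averaging blocks of pixels, with the block size playing the role of the "strength": bigger blocks mean coarser, less recognisable output. A minimal grayscale sketch of the general technique — not Amped's actual code:

```python
def pixelate(img, strength):
    """Pixelate a 2D grayscale image (list of rows of 0-255 values) by
    replacing each strength x strength block with its average value."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, strength):
        for bx in range(0, w, strength):
            # Gather the block (clipped at the image edges) and average it
            block = [img[y][x]
                     for y in range(by, min(by + strength, h))
                     for x in range(bx, min(bx + strength, w))]
            avg = sum(block) // len(block)
            for y in range(by, min(by + strength, h)):
                for x in range(bx, min(bx + strength, w)):
                    out[y][x] = avg
    return out
```

This is also why a person close to the camera may need a higher strength than someone far away: the same block size removes proportionally less detail from a larger face.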
Another thing I want to change is the shape. There's a dropdown for the shape with three types. One is Rectangle, so effectively your bounding box is the shape of the redaction. Then you've got Ellipse Inscribed, which fits an ellipse inside the bounding box. Bear in mind this black outline is the actual bounding box the model has used to identify the heads — if you use the inscribed ellipse, you have a slight issue: there might be facial features, vehicle features, or anything identifiable that falls outside the ellipse but inside the bounding box.
That could be a problem, because only what's inside the shape is redacted. For that reason, we also have Circumscribed Ellipse, which draws the ellipse through the corners of the bounding box. It will be bigger, but it gives you more certainty that no facial features are left visible, if there were any to start with. That might be the safest option.
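Geometrically, the inscribed ellipse has semi-axes of half the box width and height, so the box corners fall outside it; scaling those semi-axes by √2 gives an ellipse that passes exactly through the corners, so nothing inside the bounding box can escape the redaction. A sketch of that arithmetic (my own illustration, not the product's code):

```python
import math

def ellipse_axes(w, h, circumscribed=False):
    """Semi-axes (a, b) of a redaction ellipse for a w x h bounding box.
    Inscribed: fits inside the box. Circumscribed: passes through its corners."""
    a, b = w / 2, h / 2
    if circumscribed:
        a *= math.sqrt(2)   # scale so the corner (w/2, h/2) lies on the ellipse
        b *= math.sqrt(2)
    return a, b

def contains(a, b, x, y):
    """True if point (x, y), measured from the box centre, is inside
    or on the ellipse with semi-axes a and b."""
    return (x / a) ** 2 + (y / b) ** 2 <= 1 + 1e-9

# For a 40 x 48 box, the corner (20, 24) is outside the inscribed ellipse
# but exactly on the circumscribed one.
a, b = ellipse_axes(40, 48, circumscribed=True)
```

That factor of √2 is the price of the extra certainty: the circumscribed ellipse covers twice the area of the inscribed one.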
You’ll be asking, “Do I have to do that for every single redaction?” You can if you want, but we want to save time. What we’re going to do is apply this to all of them. To do that, we just right-click and select Apply Properties to All Hides, and all of them will be redacted. If you then wanted to change one — maybe one person is closer to the camera, so the strength of that redaction may not be applicable for them — you can individually change the strength of only that redaction.
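Conceptually, "Apply Properties to All Hides" amounts to copying one redaction's settings onto every other hide, after which any individual hide can still be overridden. A hypothetical sketch — the field names here are my own, not the application's:

```python
# Hypothetical per-redaction settings
hides = [
    {"id": 1, "type": "pixelation", "strength": 8,  "shape": "circumscribed"},
    {"id": 2, "type": "blackout",   "strength": 0,  "shape": "rectangle"},
]

def apply_properties_to_all(hides, source):
    """Copy the display settings of `source` onto every other hide."""
    for h in hides:
        if h is not source:
            h.update({k: source[k] for k in ("type", "strength", "shape")})

apply_properties_to_all(hides, hides[0])
# One person is closer to the camera, so override just that redaction:
hides[1]["strength"] = 12
```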
Now comes the very important part. I’ve used Assisted Redaction to do most of the tedious work that I would have otherwise had to do manually. But remember, we have to ensure that nothing has been missed and there are no mistakes — you may even have a feature that’s not a face that’s been detected by the model. This is where the peer review comes in. As a human, you check how the process went and make any corrections.
For example, if you look at this woman here, as I go across my clip there's a bunch of frames where, for some reason, the redactor hasn't detected her. It's missed a few frames and then recognised that person again. That can be down to compression, resolution, or lighting conditions — the detector no longer thinks that's a head and therefore isn't selecting it.
What we need to do is — okay, that person has been successfully identified up to a point. I can select that redaction, and all I have to do is click on it. You can see that when it’s clicked, you also have a representation on the timeline. Now that it’s selected, I can make manual changes. I’m going to stretch that selection for the number of frames where that person is visible.
To do that, I just drag the timeline or the playhead up to the point where that person is then picked up again by the redactor — whatever that frame is — and then deselect it to remove it. I see I’ve missed another frame. No problem — just go back a few frames, click that selection again, and stretch it for a bit longer until it joins back to the other one that’s automatically there. I can do that for everything that hasn’t been detected.
Another thing I should tell you — you see there are other people here that haven't been detected. Another reason the detector may not have picked a person up is that it has been trained only down to a certain image quality at which a person is still identifiable. At that distance, that person may not be identifiable in any case, because the resolution is simply not good enough to distinguish one person from another — there's just not enough data.
But if you want, you can go ahead and redact manually, or you can change the sensitivity threshold of the redactor. In other words, you're relaxing the parameters that decide whether the redactor thinks something is a person or a head. It's not something that can be easily done in Replay, for the specific reason that we don't want non-technical users to mess with these settings — but we have them available to change if need be.
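Under the hood, detectors of this kind typically emit a confidence score per detection, and the "sensitivity" is simply the cut-off applied to that score. A hypothetical illustration of the trade-off being described — lower the threshold and distant, low-quality heads get through, but so do more false positives:

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold.
    The threshold value 0.5 is illustrative, not Replay's actual setting."""
    return [d for d in detections if d["score"] >= threshold]

detections = [
    {"label": "head", "score": 0.92},   # clear, close-up head
    {"label": "head", "score": 0.31},   # distant, low-resolution person
]
strict = filter_detections(detections, threshold=0.5)    # drops the distant one
relaxed = filter_detections(detections, threshold=0.25)  # keeps both
```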
Once you’ve done the review and you’re happy, you go to Export. You can just export the current image — wherever your playhead is. So you may just be doing an appeal, looking for a specific person. You can do this very quickly in Replay — I’ll show you, it’s a bit off-topic but since we’re here. I can go to the Annotate tab and quickly do — see here, you’ve got a bunch of different annotation types — I can click on Magnify, which is one of my favourite annotation tools in Replay and FIVE, and just do a magnification.
I just draw a magnification rectangle, reposition this blue dot to where this bad person is. I can zoom in a little bit, change the interpolation method to better quality, and resize this rectangle if I need to. If I want, I can change the border type to Shape Only — or my favourite is this one, Point to Zoomed Area. With this, you’ve got magnification and a visual representation of where that magnification comes from.
This is why it's important that the Redact tab now comes before Annotate. The redaction is applied first and the annotation later, so even if you're magnifying, there's no risk of revealing something that had already been redacted.
Then what you’ll usually do is go to Export — Export Current Image, or for example, export the video as MP4. This is the option you choose if you want to export your redacted video. This is designed to be a format compatible with every off-the-shelf Windows or Mac computer. It doesn’t require any special player — it’ll play with Windows Media Player or QuickTime if you’re using a Mac.
Imagine how many solicitors use Macs — you don’t have to worry about compatibility. You choose this option, and it’ll be playable in court, playable by every Windows and Mac machine off the shelf. It’s designed for full compatibility. Then you can generate a report — but I’m going to do that for the next sample because it’s much more appropriate.
Now I’m going to drop a clip in here. Just go back to the Play tab, click on this clip, drag and drop. We saw a brief shot of this bad guy, and you might recognise these two people. That crime has been filmed, and what we want to do is protect the identity of the victim — and there’s a witness as well, a very brave member of the public who gives chase to this bad guy.
In this instance, I'm going to do it on the whole video. The contrast looks pretty good already — I don't need to go into Enhance Light and change the lights. I can, but I'm pretty happy with it. Going into the Redact tab, I want to show you something quickly. Remember earlier when I said that in Replay you can also use non-AI-based redaction?
You can animate redactions very quickly without using Assisted Redaction. I’m going to show you why sometimes you might use one method over the other. For example, if I wanted to redact this guy — I don’t really want to, because this is the suspect — but say I was going to redact him, I could choose Hide, choose a circle, change the strength like this. I’ve done this manually.
Now I’m going to use a process called tracking. It’s not using AI — it’s manual. I’ll click on this button called Track, and now you can see this yellow and green outline. The green one is basically telling the program what area to track. For example, you see the top of this person has a very distinctive pattern on the shirt — it’s easily identifiable from a pixel point of view. By “pixel point of view” I literally mean the configuration of where those pixels are and what their luminance and colour values are.
This yellow box is the anticipated direction of movement of where that subject is going. I’m telling the software, “Don’t bother looking for the shirt above or on the right, because that’s going to move to the left in the next frame.” I’m making the process a bit easier by shifting the yellow rectangle towards the anticipated direction of movement.
Notice how what I want to redact is the head, but what I'm tracking is the shirt. Why? Because if I'm not using an AI methodology, a face changes a lot in pixel terms — look at me speaking to you: if I turn my head like this, in pixels that will look considerably different from how it looks when I'm facing the camera.
With the non-AI method, the program isn't looking for a head — it's just matching pixels. For that reason, I opted to track the suspect's shirt. Even if I move my head, presumably my chest will still be facing the camera and won't change so much. Now I click the Track button again. In every frame, the software looks for that shirt — yes, the pixels will look slightly different, but not different enough for the program to think it's a different object.
It will keep tracking it quite happily until it gets to a point where it’s lost. The reason is that there was a point where the suspect changed his orientation to the camera, the shirt feature is no longer visible, and therefore the software has lost it. That’s why in some instances it’s more effective to use AI — AI has been trained to recognise heads even if the heads change their orientation and angle towards the camera.
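Pixel-based tracking like this is typically a template search: take the reference patch (the shirt) and, in each new frame, find the window inside the predicted search area whose pixels differ least from it, for example by sum of absolute differences. A toy grayscale sketch of that general technique — not the actual tracker:

```python
def track_patch(frame, template, search):
    """Find the top-left position in `frame` (2D grayscale lists) whose
    window best matches `template`, searching only inside the rectangle
    `search` = (x, y, w, h) — i.e. the anticipated direction of movement."""
    th, tw = len(template), len(template[0])
    sx, sy, sw, sh = search
    best, best_pos = None, None
    for y in range(sy, sy + sh - th + 1):
        for x in range(sx, sx + sw - tw + 1):
            # Sum of absolute differences between template and this window
            sad = sum(abs(frame[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos

# Toy example: a 6x6 frame with a distinctive 2x2 patch planted at (3, 2)
frame = [[0] * 6 for _ in range(6)]
template = [[9, 1], [1, 9]]
frame[2][3], frame[2][4] = 9, 1
frame[3][3], frame[3][4] = 1, 9
```

This also shows why the tracker loses the shirt when the suspect turns: once the patch no longer appears in the frame, every candidate window differs badly, and the "best" match is meaningless. An AI head detector doesn't have that problem, because it was trained on heads at many orientations.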
But there's no reason you can't use this method if you're tracking something of good enough quality to be trackable. You may even use AI for part of the clip and then use this where the AI didn't work. The choice is yours — I've simply shown you another method you can use to make corrections alongside Assisted Redaction.
Let’s do Assisted Redaction now, because this is really the topic of the webinar. I’m going to delete this and click on Assisted Redaction. Again, I’m looking for heads. This time I’m also going to add labels — there’s an option to add labels, which will just be some text around the redactions. Then click Run Assisted Redaction.
Again, the AI model is doing most of the thinking. Don't panic too much if it doesn't seem to be going anywhere — at some point it will kick off and start. It may stop at some point and then resume. This depends on your graphics card, computer performance, et cetera.
We designed the system to be fairly quick. Rather than looking at every individual frame, it works on frame ranges, checking whether it can detect a head moving from a start frame to an end frame. Now we've got our heads detected. There's something flashing here — this is what we call a false positive: in some frames the redactor has decided this is a head, and of course it isn't. This reinforces the point that the Assisted Redactor isn't perfect — it has its limitations.
Now I’m going to do some refinements. You’ve got this panel, and you can do some preparation work in here. However, if you missed anything, not to worry — after you’ve done redactions, you can still go and edit them on a one-to-one basis. So whichever is most comfortable in your workflow.
I'm looking at all the elements detected in this particular video. There are a couple of instances — actually two or three — where the same person has been detected as different people. You can clearly see my head appears one, two, three times here, or even four. To check, all you have to do is click on one, and it will reveal the first frame where that head was detected. I can see that's still me.
What I can do is confirm that's the same person and just select them by clicking the tick boxes. Once I select them all, I click Merge. That will merge all the frames where those three redactions were visible into one single object. I'm also going to do this with Michelle — she's been identified as two different people, maybe because of her hair. Actually, the redactor's done a reasonably good job in identifying her at all, because with the hair like this it might not look like a person's head, so it simply thinks this is a different person from the one detected at the beginning of the video.
I’ll manually select them by clicking the tick box, and there’s also one of Michelle here at the end — click that one as well — and then click Merge. There’s my witness as well, the brave member of the public who gives chase. There’s another instance of him too. Two different instances, but actually the same person — select them and click Merge.
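Merging works because each fragment is just a set of per-frame boxes stored under a different identifier; combining them means pooling those boxes under a single surviving identifier. A hypothetical sketch of the idea (the data layout is my own, for illustration):

```python
def merge_tracks(tracks, ids):
    """Merge the fragments whose ids are in `ids` into one track.
    `tracks` maps id -> {frame: bounding box}. Every tracked frame is
    kept; the lowest id survives (an arbitrary illustrative choice)."""
    merged = {}
    for tid in sorted(ids):
        merged.update(tracks.pop(tid))   # pool all per-frame boxes
    keep = min(ids)
    tracks[keep] = merged
    return keep

tracks = {
    7:  {0: (10, 10, 20, 24), 1: (11, 10, 20, 24)},  # Michelle, start of clip
    31: {40: (80, 12, 20, 24)},                      # same person, re-detected
}
merge_tracks(tracks, [7, 31])   # now one object covering all three frames
```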
Then there are a couple of instances of objects that have been misinterpreted. There's one frame — or one little range of frames — where a bag has been mistaken for a head. No problem — just select it and delete. While I'm at it, instead of deleting them one by one, I can select all of the objects that have been mistaken for heads. Select them all — there's another one here too. If you don't know what something is, just click on it and it'll be revealed on the viewer.
That's definitely not a head — click on that one too. This one — what is that? Again not a head; selected. This one here, again, not a head; selected. Then Remove, and it'll remove them all at once. I've missed a few — no problem, just select them and Remove. There's one more instance of me — when I merged myself as three objects, I missed one, so there were actually four instances. Three have already been merged; all I have to do is merge in the additional one. Select them, click Merge.
Now I’m left with a streamlined list of objects I want to redact. I can change the label — for example, this one is the suspect, so I’ll just click on the label text box and type “suspect” with my keyboard. Do the same with Michelle here — she’s the victim. Same with our witness.
Now I want to show you a quick trick — a quick shortcut. I’ve redacted everything: my witness, my victim, and also my suspect. But I don’t want to redact the suspect, because the suspect is the one who needs to be visible in the video. He’s the bad guy, so we definitely want to see him doing his bad things.
Instead of using the tracker to redact, I can use the tracker to annotate. Say I wanted to magnify this person instead of redacting him — I can reuse the tracking data that was intended for the redaction for the annotation. Let me show you. First, I'll click Apply All, which will redact everything. Then I'll click on this object — not the label, since now there are two objects, and I don't want the label one.
I'll delete the text one. Now I'm going to right-click on this one and select Copy Tracking Data. That copies all the coordinates for all the frames where that redaction is present. Now I'm going to delete this one, because I don't want the redaction. I'll go to the next tab, Annotate, and do an annotation like an arrow or a magnification — probably a magnification, because I really like it and it shows the evidence quite well.
I'll put my dot here just to point out my suspect, make it a little bit bigger, make sure his face is visible, and select Point to Zoomed Area, which will connect the magnification to where it comes from on the video. Then I'll right-click and select Paste Tracking Data. That uses the tracking data from the Assisted Redactor to animate the annotation.
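Conceptually, the copied tracking data is just the per-frame coordinates, and pasting re-attaches them to a different overlay. A hypothetical sketch — the dictionary layout here is invented for illustration:

```python
# Hypothetical objects: a redaction and an annotation each carry
# per-frame positions keyed by frame number.
redaction = {
    "kind": "hide",
    "tracking": {0: (120, 80), 1: (124, 80), 2: (129, 81)},  # frame -> (x, y)
}
annotation = {"kind": "magnify", "tracking": {}}

# "Copy Tracking Data" then "Paste Tracking Data":
clipboard = dict(redaction["tracking"])
annotation["tracking"] = clipboard

# The magnification now follows the suspect exactly as the
# (deleted) redaction would have.
```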
This is in preparation for our next feature, which will probably be the ability to track and annotate as well. At the moment, you can officially only track and redact, but with this shortcut you can combine tracking with a magnification — or with an arrow, if you prefer. There are different types of annotations you can use, but the tracking-data principle remains the same.
I want to check — for example, here Michelle isn’t being detected, but I probably wouldn’t need to worry, because at this point in time she isn’t identifiable. It’s just a lady with long brown hair. I don’t need to worry about it. If, however