The #1 Problem With AI and ML ... Accountability
The Department of Homeland Security is developing a software algorithm with the power to flag you as a terrorist. DHS plans to give this software for free to foreign airports around the world so they can find out who the bad guys are. As I read the attached article, I thought: wow, this is going to be a mess. There will be so many false positives, especially at first, and how are these systems going to be connected to share intelligence on travelers? There had better be multiple data points, not just physical and residential factors, determining who gets flagged as a terrorist. And if they get it wrong, who do I complain to? Who's accountable to the citizen? Who do I sue? :-)
DHS? The foreign government using the software? The software developer? The machine? EVERYONE!
Many questions need to be answered, but there doesn't appear to be enough pressure to answer them before these kinds of algorithms are created. So we could be creating a humanitarian crisis with no way to resolve it.
Another question is why DHS would spend so much time and money searching for what amounts to a needle in a haystack. Terrorism today is mostly lone actors or small groups, which are even harder to find. On the other hand, I suppose we have to start somewhere: if a narrow use case can be created and tested, we can make progress toward the end goal, which is greater efficiency through AI and ML technology.