This is part 2 – see part 1 here…
In the radio programme The Uncontrollable Algorithm, one of the points that struck me as particularly alarming – and against the norm of software development – is that of not knowing how the algorithm will behave.
At first glance, this looks rational. That’s why it’s an ‘intelligent’ entity – it’s able to think for itself. However, where does that end? If you study SABSA (and I hold a Foundation Certification in it), one thing you realise – because it is drummed into you – is that traceability and verification from the top-down principles and concepts, through to the logical and then physical architecture, is vital. It is equally important to be able to trace from bottom to top too.
Bringing this back to security – if the outcomes of the algorithm are difficult to predict, then surely that unpredictability is a very attractive vector to exploit. In particular, if you could alter the way the system operates to benefit your cause, then you would!
Given the increasing proliferation of AI as a decision-making tool (i.e. more than a decision-support tool), this has to be a proportionally increasing concern.
This leads me to think even more about through-life security, but focusing on two areas:
- Code security – how important it is to be as sure as you can be that the code is ‘secure’ (quotes because there are no absolutes in security). DevSecOps practices are the best way to do this. Coupled with a sufficiently demanding bug-bar, this will likely lead to good-enough security in most situations (there’s a minimal bug-bar sketch after this list).
- Configuration security – in addition to sound access control and other basic hygiene factors, the use of File Integrity Monitoring (FIM) software is non-negotiable. I still meet folks who believe this to be ‘scary’ or that it will get in the way of operational support teams. While I understand the concern, in today’s world it really doesn’t stack up as a reason not to use it (see the second sketch below).
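To make the bug-bar idea concrete, here is a minimal sketch, in Python, of the kind of gate a DevSecOps pipeline could run after a security scan. The findings.json file, its structure and the severity names are all assumptions for illustration – real scanners emit their own formats – but the principle is the same: the build fails automatically if any finding breaches the agreed bar.

```python
#!/usr/bin/env python3
"""Hypothetical bug-bar gate: fail the build if scanner findings breach the bar."""
import json
import sys

# The bug-bar: the most severe finding the team has agreed to tolerate.
# Severity names, their ordering and the report format are assumptions for this sketch.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]
BUG_BAR = "medium"  # anything more severe than this blocks the build


def rank(severity: str) -> int:
    """Rank a severity name; unknown names are treated as most severe."""
    severity = severity.lower()
    return SEVERITY_ORDER.index(severity) if severity in SEVERITY_ORDER else len(SEVERITY_ORDER)


def main(report_path: str) -> int:
    # 'findings.json' is a stand-in for whatever your scanner actually emits.
    with open(report_path) as report:
        findings = json.load(report)

    blockers = [item for item in findings if rank(item.get("severity", "info")) > rank(BUG_BAR)]
    for finding in blockers:
        print(f"BLOCKER [{finding['severity']}] {finding.get('title', 'unnamed finding')}")

    # A non-zero exit code fails the CI job, enforcing the bar automatically.
    return 1 if blockers else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

The value is not the script itself but the fact that the bar is codified and enforced on every build, rather than negotiated at release time.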
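And to show that File Integrity Monitoring need not be scary, here is the core idea in a minimal sketch: take a cryptographic baseline of the files you care about, then on each run compare against it and report anything that has changed, appeared or disappeared. The watched paths and the baseline filename are assumptions for illustration; real FIM products add tamper-resistant storage of the baseline, alerting and change-approval workflows on top of this.

```python
#!/usr/bin/env python3
"""Minimal file-integrity-monitoring sketch: baseline SHA-256 hashes, then detect drift."""
import hashlib
import json
import sys
from pathlib import Path

# Which directories to watch and where to keep the baseline are assumptions for this sketch.
WATCHED_DIRS = [Path("/etc"), Path("/usr/local/bin")]
BASELINE_FILE = Path("fim_baseline.json")


def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def snapshot() -> dict:
    """Hash every regular file under the watched directories."""
    hashes = {}
    for root in WATCHED_DIRS:
        for path in root.rglob("*"):
            if path.is_file():
                try:
                    hashes[str(path)] = hash_file(path)
                except OSError:
                    pass  # unreadable files are skipped in this sketch
    return hashes


def main() -> int:
    current = snapshot()
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(current, indent=2))
        print(f"Baseline of {len(current)} files written to {BASELINE_FILE}")
        return 0

    baseline = json.loads(BASELINE_FILE.read_text())
    changed = [p for p in current if p in baseline and current[p] != baseline[p]]
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]

    for label, paths in (("CHANGED", changed), ("ADDED", added), ("REMOVED", removed)):
        for p in paths:
            print(f"{label}: {p}")

    # Any drift is reported via a non-zero exit code so it can feed alerting.
    return 1 if (changed or added or removed) else 0


if __name__ == "__main__":
    sys.exit(main())
```

Run it once to create the baseline, then again (on a schedule, say) to detect drift; the non-zero exit code is the hook for alerting. Seen like this, it is hard to argue that FIM gets in the way of operational teams – it simply makes unapproved change visible.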