We all know by now that the CIA has suffered the worst hack in history, worse than the Edward Snowden and Daniel Ellsberg leaks, the OPM breach, and any commercial hack you can think of. The likely response will be lots of time and money spent doing more of the useless stuff that failed last time.
There is a solution. It's proven, at scale and in practice. It's based on the best available technology: machine learning. It is the reason why banks lose only little dribbles of money to criminals and hackers, and have never suffered substantial card losses, despite decades of thousands of fraud attempts per year! You think maybe the CIA should try it?
Security at the CIA
I have no idea how the CIA does security today. I suspect the bosses all say it's incredibly important, they spend lots of money on it and use "the best" methods.
The CIA may spend more money and use stronger words, but they're pretty much like most large organizations. They just don't know how to get security done and don't seem to care. What about government-issued security regulations? Ineffective.

There are solutions, but no one is interested. Retail stores have implemented them. Even local libraries have implemented them.
Credit Cards
Let's drill into credit card technology, which has implemented a method of security that is proven at large scale and now uses ever-improving machine learning methods. If the CIA had used credit card security technology to protect its assets, the recent breach would not have happened.
There are hundreds of millions of active credit cards in the US, moving trillions of dollars through more than 30 billion transactions a year. Think of all those cards being used by tens of millions of people at millions of merchants. The fraud problem must be huge, right?
All I can say is, it's a darn good thing the CIA isn't offering credit cards. While the card system isn't perfect, losses due to fraud are well under 10 cents per $100.
First-step card security technology
The card issuers were always concerned about fraud, and as time went on and the numbers grew, the technology deployed to avoid loss evolved.
Among the earliest automated steps to avoid loss in credit cards were two simple measures: credit limits and velocity checks.
Credit limits are what they sound like: how much money you can spend (take) before the card is blocked, i.e., prevented from further spending. The limit may be $500 or it may be $50,000, but there's always a limit.
Velocity checks measure how frequently a card is used. If a card is used once a day, that's pretty low frequency. If it's used twice in ten minutes, that's high frequency, and will probably get the card blocked.
These simple measures did an amazing amount to limit fraud losses. They kept losses under 20 cents per $100 spent.
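To make this concrete, here's a minimal sketch of how the two checks might be combined at authorization time. The dollar limit, time window, and thresholds are made-up illustrations, not any issuer's actual parameters.

```python
# A minimal sketch of the two classic checks; all numbers are illustrative.
from datetime import datetime, timedelta

CREDIT_LIMIT = 5_000.00              # hypothetical dollar limit
VELOCITY_WINDOW = timedelta(minutes=10)
VELOCITY_MAX = 2                     # max charges allowed inside the window

def authorize(balance, amount, history, now=None):
    """Approve a charge only if it passes the credit-limit and velocity checks."""
    now = now or datetime.utcnow()
    # Credit limit: block if this charge would push the balance over the limit.
    if balance + amount > CREDIT_LIMIT:
        return False
    # Velocity check: block if too many charges land inside the window.
    recent = [t for t in history if now - t <= VELOCITY_WINDOW]
    if len(recent) >= VELOCITY_MAX:
        return False
    return True

# A third charge within ten minutes is refused, even though the limit is fine.
history = [datetime.utcnow() - timedelta(minutes=3),
           datetime.utcnow() - timedelta(minutes=1)]
print(authorize(balance=1_200.00, amount=80.00, history=history))  # False
```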
Applying first-step card technology to the CIA
In computer terms, all of the assets lost by the CIA are files. Files are managed in a file system, which is a key part of an operating system. All operating systems have security of some kind, which I would hope the CIA already uses. This simply means that any user has to log in and provide security credentials before they're given access to anything.
Every operating system and file system has the ability to restrict access to files by directory. Did the CIA use this simple security measure? I don't know. Did they place files in a hierarchy of directories and carefully control who had access to which tiny subset of all the files they were working on? I don't know, but if they failed to use this existing security method, someone should go to jail.
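For the record, directory-level restriction needs nothing exotic. Here's a bare-bones sketch using ordinary POSIX permissions; the paths and the one-group-per-project scheme are hypothetical, and a real classified system would layer ACLs and mandatory access controls on top of this.

```python
# Hypothetical layout: one directory per project, one POSIX group per project.
import os
import stat

project_dir = "projects/alpha"          # stands in for a real secure share
os.makedirs(project_dir, exist_ok=True)

# Directory: owner has full access, the project group can list and enter,
# everyone else gets nothing (mode 0o750).
os.chmod(project_dir, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)

# Files inside: read/write for the owner, read-only for the group (mode 0o640).
for name in os.listdir(project_dir):
    os.chmod(os.path.join(project_dir, name),
             stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
```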
Regardless of how existing security methods were deployed, the methods of credit limit and velocity check could be added to the file system to keep any one person from "spending too much," i.e., from accessing too many files in a way that could lead to a security breach. Obviously this simple method is not currently deployed at the CIA.
Inside the file system, after a request for a file has been made and before it is honored, there is already a security check: does this person have access to files in this directory? What's needed is straightforward. There needs to be a per-person log of all file accesses, including file name and date/time of access. For each person, there should be a limit to the number of files accessed per period of time (for example, per day), which works like a credit limit. In addition, there should be a velocity check, so that if a person uses half their daily quota in ten minutes, there could be a problem. When either of these checks is tripped, the user is locked out, and security people need to check and make sure the person is actually doing authorized work. If the limits are set correctly for each person's role, this will stop many problems.
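Here's a sketch of what that per-person "credit limit" and "velocity check" on file access could look like. The quota numbers and the lock-out hook are assumptions for illustration, not a description of any system the CIA actually runs.

```python
# Per-person file-access log with a daily quota and a velocity check.
from collections import defaultdict
from datetime import datetime, timedelta

DAILY_FILE_QUOTA = 200                  # hypothetical files-per-day limit
VELOCITY_WINDOW = timedelta(minutes=10)
VELOCITY_MAX = DAILY_FILE_QUOTA // 2    # half the quota in ten minutes is trouble

access_log = defaultdict(list)          # user -> list of (timestamp, path)

def record_and_check(user, path, lock_out, now=None):
    """Log one file access; lock the user out if either check trips."""
    now = now or datetime.utcnow()
    access_log[user].append((now, path))
    today = [t for t, _ in access_log[user] if t.date() == now.date()]
    recent = [t for t, _ in access_log[user] if now - t <= VELOCITY_WINDOW]
    if len(today) > DAILY_FILE_QUOTA or len(recent) > VELOCITY_MAX:
        lock_out(user)                  # security staff must clear the user manually
        return False
    return True

# Usage: the file system calls this on every request, before serving the file.
ok = record_and_check("analyst_42", "/proj/alpha/report.docx",
                      lock_out=lambda u: print(f"LOCKED OUT: {u}"))
```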
Obviously, bad guys figure this out, so you add things. If you see that someone keeps accessing new files, day after day, just under the daily maximum, something is wrong. You apply a different kind of velocity check and shut them down. There are other simple extensions that cover a wide variety of bad behavior and shut people down before they get many files.
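Building on the sketch above, one such extension might flag a user who finishes near the daily quota day after day; the 90% threshold and the three-day trigger are, again, illustrative assumptions.

```python
# Extension to the sketch above: catch a "slow drain" that stays just under
# the daily quota but repeats day after day.
NEAR_LIMIT = 0.9 * DAILY_FILE_QUOTA     # hypothetical "suspiciously busy" level
SUSPICIOUS_DAYS = 3                     # hypothetical trigger

def near_limit_days(user):
    """Count the distinct days on which this user finished near the quota."""
    per_day = defaultdict(int)
    for t, _ in access_log[user]:
        per_day[t.date()] += 1
    return sum(1 for count in per_day.values() if count >= NEAR_LIMIT)

def slow_drain_check(user, lock_out):
    if near_limit_days(user) >= SUSPICIOUS_DAYS:
        lock_out(user)
        return False
    return True
```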
This method, if applied to the CIA, would put them decades behind credit card technology, but way ahead of where they are. This method would have prevented the recent CIA breach.
Neural Network card technology
Credit limits, velocity checks and their extensions were and are effective for reducing loss. But the bad guys got smarter, and managed to game the system in spite of them.
Before long, some smart and motivated people got together and applied an older machine learning technique, neural networks, to the problem. HNC, now part of FICO, became the industry standard for fraud prevention.
The approach is basically to investigate in depth the fraud cases that had been caught and train neural nets to recognize similar behavior. The neural nets worked best with a large collection of cases to train on, so an industry coalition was organized, and most of the card issuers contributed their data to HNC, which distributed well-trained models to all the participants. This resulted in a tremendous improvement -- it cut the already-low fraud rate in half.
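As a toy illustration of the idea (not HNC's or FICO's actual models), here's a tiny neural network trained on labeled historical transactions; the features and data are invented, and scikit-learn stands in for a production modeling stack.

```python
# Train a small neural net on labeled cases: 1 = confirmed fraud, 0 = legitimate.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [amount, hour_of_day, merchant_risk_score, distance_from_home_km]
X = np.array([[  35.0, 14, 0.1,   5],
              [ 900.0,  3, 0.8, 400],
              [  12.5, 11, 0.2,   2],
              [1500.0,  2, 0.9, 950]])
y = np.array([0, 1, 0, 1])              # labels from past fraud investigations

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score a new transaction: estimated probability that it resembles known fraud.
print(model.predict_proba([[1100.0, 4, 0.7, 600]])[0][1])
```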
However, this particular method of machine learning, while it works well with credit cards, does not apply to cases like the CIA's. Neural nets require a large number of cases to train against, which exist in the credit card world. Fortunately, breaches at places like the CIA, while catastrophic, are few and infrequent. Therefore, the neural network method would not help the CIA.
The latest Machine Learning card technology
Recently, some smart innovators applied more modern machine learning techniques to credit card fraud, techniques that go deeper into the card issuer's computer systems. This company, Feedzai, is posting tremendous gains in fighting fraud without requiring data sets of existing fraud cases to train against. This new, powerful method trains against normal, lawful users' ordinary usage patterns and detects departures from them. Because of this, it can catch first-time fraud.
Disclosure: I'm part of a VC firm that invests in Feedzai.
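To show the train-on-normal idea in the abstract (this is a generic anomaly-detection sketch built with scikit-learn, not Feedzai's technology), the model below sees only ordinary transactions and flags anything that departs from them; the features are invented.

```python
# Learn what "normal" looks like from lawful transactions only; no fraud labels.
import numpy as np
from sklearn.ensemble import IsolationForest

# Ordinary usage: [amount, hour_of_day, merchant_category]
normal = np.array([[30.0, 12, 1],
                   [45.0, 18, 2],
                   [22.0,  9, 1],
                   [60.0, 20, 3],
                   [38.0, 13, 1]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for a departure from normal usage, +1 for business as
# usual, so a never-before-seen pattern can be flagged without a single
# labeled fraud example.
print(detector.predict([[2500.0, 3, 7]]))
print(detector.predict([[40.0, 15, 1]]))
```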
Applying credit card machine learning to the CIA
Feedzai-type machine learning methods are exactly what's needed at the CIA. I'm not saying they can be plugged in as-is to solve the problem. I am saying that the machine learning methods Feedzai uses to fight credit card fraud are directly applicable to extending the "credit limit" and "velocity check" ideas to new levels of accuracy in protecting files in a file system.
Part of what Feedzai does is model the patterns of card usage that are typical for a given kind of consumer. Apply this to the kinds of users who access files at the CIA. Some of them are designers, some are programmers, others are quality people and testers. Each of them will have a typical pattern, accessing their own characteristic set of files. Usually, they'll access the same set of files over and over as they work on them. Accessing a file for the first time will be rare, and will usually happen only when they're starting work on a new project.
What if a tester, for example, suddenly starts reading files from a project they're not currently working on? What if they're reading source code files, while their normal work only involves executables? Alarm!! Ahh-OOOgahhh!! Shut down access to the VERY FIRST out-of-pattern file they try to read! And that person has some 'splaining to do.
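Here's a much-simplified, rule-based stand-in for that idea (a real system would learn the profiles statistically rather than hard-code them); the roles, paths, and block() hook are hypothetical.

```python
# Each user's "normal" is the set of files and file types they routinely touch.
normal_profile = {
    "tester_jane": {
        "files": {"/proj/alpha/tests/run_suite.py", "/proj/alpha/bin/app.exe"},
        "extensions": {".py", ".exe", ".log"},
    },
}

def check_access(user, path, block):
    """Allow only accesses that fit the user's established pattern."""
    profile = normal_profile.get(user)
    if profile is None:
        block(user, path)               # no established pattern yet: hold for review
        return False
    ext = "." + path.rsplit(".", 1)[-1] if "." in path else ""
    if path not in profile["files"] or ext not in profile["extensions"]:
        block(user, path)               # first out-of-pattern access is stopped cold
        return False
    return True

# A tester suddenly reading source code from another project is stopped
# on the very first attempt.
check_access("tester_jane", "/proj/bravo/src/main.c",
             block=lambda u, p: print(f"BLOCKED {u}: {p}"))
```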
It's legitimate for a team to be assigned to a new project. So why not hand-monitor their actions during the start-up period, with every single file access logged and double-checked as appropriate? Then, once a pattern has been established, the automated method can take over again.
Even a simple implementation of this kind of machine learning would have prevented the massive CIA breach. The 8,761 documents currently available on Wikileaks are just a portion of the whole. Even if the directory-level file system security were as pathetic as I suspect it was, the machine learning method would have caught anyone accessing files outside their normal work pattern, subjecting them to scrutiny and probable removal from the premises.
Conclusion
Security breaches of the most damaging kinds keep rolling in, from the most deeply secret parts of government (NSA, CIA, Army) to important commercial organizations. It's way past time that proven methods of the kind I describe here, and in the posts linked here, were deployed to solve the problem.