http://www.realworldtech.com/forums/index.cfm?action=detail&id=67226&threadid=66595&roomid=2
This link describes _exactly_ what I think is the future of OS design. Andrew Tanenbaum has said of Minix 3 that "a bug in a driver [...] cannot bring down the entire OS." That's a big claim, and claims like it are what make microkernels sound attractive.
But as Linus has so often argued, distributed algorithms are hard, and a microkernel turns every kernel subsystem into a distributed-systems problem by splitting it across message-passing servers. That difficulty is why microkernels will never supplant traditional kernels.
But there may be a better way. Drop the idea that the only way to protect the kernel is to run untrusted code in a user process. The new idea: untrusted code is written in a language that, either statically or at run time, can stop that code from doing anything really bad.
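As a rough sketch of the idea (Rust here, purely for illustration; `kernel_api`, `MmioRegion`, and `read32` are invented names, not any real kernel's interface): the driver is compiled with unsafe constructs forbidden, so the only way it can touch hardware is through a narrow API whose bounds are baked in.

```rust
// A minimal sketch, not a real kernel API: `MmioRegion` stands in for
// whatever narrow, audited interface the kernel exposes to drivers.
#![forbid(unsafe_code)] // statically reject raw pointers, inline asm, etc.

mod kernel_api {
    /// A device-register window the kernel hands to a driver, with the
    /// bounds baked in. The driver never sees a raw pointer.
    pub struct MmioRegion {
        len: usize,
    }

    impl MmioRegion {
        pub fn new(len: usize) -> Self {
            MmioRegion { len }
        }

        /// Out-of-range reads are rejected, not performed. A real
        /// kernel would do a volatile hardware read at `offset` here.
        pub fn read32(&self, offset: usize) -> Option<u32> {
            if offset.checked_add(4)? <= self.len {
                Some(0xdead_beef) // placeholder for the device value
            } else {
                None
            }
        }
    }
}

// The untrusted driver can only go through the safe API; there is no
// way for it to fabricate a pointer into arbitrary kernel memory.
fn driver_probe(regs: &kernel_api::MmioRegion) -> Option<u32> {
    regs.read32(0x10)
}

fn main() {
    let regs = kernel_api::MmioRegion::new(0x100);
    println!("probe: {:?}", driver_probe(&regs)); // Some(...)
    println!("bad:   {:?}", regs.read32(0x1000)); // None
}
```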
When such code is loaded into the kernel, the kernel's "trusty" (no pun intended) compiler combs through it, adding a bounds check to every pointer dereference that needs one, and bingo: you can make the same guarantee in a traditional-ish kernel that a microkernel makes. (The only non-traditional thing about this new type of kernel would be the inclusion of said compiler.)
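Here is a minimal sketch of what one of those inserted checks might amount to, again in Rust and again with invented names (`DriverHeap`, `store8`, and `DRIVER_HEAP_LEN` are stand-ins, not a real mechanism): every store the untrusted module performs gets rewritten into a checked store into the module's own region, so a bad index fails the driver rather than corrupting the kernel.

```rust
// What the driver author conceptually wrote: *ptr = value
// What the trusted compiler emits instead: a checked store into the
// region the module is allowed to touch.

const DRIVER_HEAP_LEN: usize = 4096;

struct DriverHeap {
    bytes: [u8; DRIVER_HEAP_LEN],
}

impl DriverHeap {
    /// The inserted bounds check: every "pointer dereference" in the
    /// untrusted module is rewritten into a call like this one.
    fn store8(&mut self, addr: usize, value: u8) -> Result<(), ()> {
        match self.bytes.get_mut(addr) {
            Some(slot) => {
                *slot = value;
                Ok(())
            }
            // Out of bounds: unload/fault the driver, not the kernel.
            None => Err(()),
        }
    }
}

fn main() {
    let mut heap = DriverHeap { bytes: [0; DRIVER_HEAP_LEN] };
    assert!(heap.store8(42, 0xff).is_ok());       // in bounds: performed
    assert!(heap.store8(100_000, 0xff).is_err()); // out of bounds: caught
}
```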
1 comment:
You should look at Microsoft's Singularity. They do a lot of static analysis in their "trusty" compiler. They even turn off hardware memory protection and manage to do without it. Of course, the apps have to be .NET or something like it, I think (managed code).