Tuesday, January 30, 2007

Imbalanced (IMBA) Corporate Security.

It seems that both corporate networks and their physical installations can be compromised because of a single irresponsible security officer.

When we talk about security (at a corporate level) we imagine an area (or multiple areas) where only certain people are allowed in. And once they are in, they are divided into groups depending on what they are meant to do in that area. That area may be a physical location (office/building), where only the company employees are granted entry privileges. Once inside, each one works in his own cubicle and only senior employees wander around checking on everybody else. It is the same with an electronic network: you have different areas (subnet A, subnet B, etc.) and different kinds of permissions (server 1 access, server 2 access, etc.).
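The area/group model above can be sketched as a tiny access-control list. This is only an illustration - the area names and groups are invented, not taken from any real system:

```python
# Minimal ACL sketch: each area maps to the set of groups allowed in.
# All area and group names here are illustrative.

ACL = {
    "subnet_A": {"employees", "admins"},
    "server_1": {"admins"},
    "server_2": {"admins", "backup_operators"},
}

def allowed(user_groups, area):
    """True if any of the user's groups grants access to the area."""
    return bool(set(user_groups) & ACL.get(area, set()))

print(allowed(["employees"], "subnet_A"))  # True
print(allowed(["employees"], "server_1"))  # False
```

The same two-step check (are you allowed in the area at all, and what may you do once inside) applies whether the "area" is a building or a subnet.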

Security officers are mostly concerned with letting people in. When a new guy is hired, they screen him and watch him for a while before granting him appropriate permissions. The problem is that administrators are selfish creatures. If they say you are "OK", that's it. They never check up on you or re-evaluate their decision.

This is bad enough, but the real problem starts when they forget about you even after you leave the company for good. As a result, active accounts of ex-employees remain in the system, allowing them access at any time. This is huge! It only takes an unhappy ex-employee with the appropriate privileges (and maybe a little hacking) to steal or destroy information, or to damage the infrastructure itself.

It's been over a year since Bob left the company he was working at. A few days ago he realized that his network access had NOT been revoked (and - I bet - neither had his physical privileges, alarm codes, etc.). He was able to remotely access specific systems on the company network and gain administrative privileges. He could install backdoors on those systems to ensure future access. He could use those systems to attack others, sniff the inside of the company's network (firewalls are of no use in this case) and basically do a lot of nasty things. He could also take advantage of small security weaknesses he had knowledge of (like the fact that they used the same local admin password on every PC) to cover his tracks and hide his identity. Taking it a little further, and under the assumption that nobody bothered to cancel his alarm code (they did take his key, though), it would be possible to invade the premises during the night, disarm the alarm using his code (or any other ex-colleague's code he knows - this is another big issue) and steal or damage anything he wanted.

As Bob told me, it took 3 months from the day he started working there to get a key for the front door and remote access privileges. Apparently the security officer wanted to make sure he was not some malicious person. What worries me though is that the officer took all that time to verify Bob (while making his life harder, since he was an employee and did not have sufficient means/privileges to do his job) but, 12 months after his last day at work, Bob's clearance still hasn't been revoked. It's safe to assume that this is not a one-time event. Unfortunately, it is my belief that there are a lot of "orphan" accounts in the system.

This raises a couple more risks. Let's assume that Jane is also an ex-employee but isn't as cunning as Bob. She has never thought of doing any of the stuff I've just talked about. Jane's account is still active though, and it is protected by a very easy password. When the sysadmin tries to enforce a new (better?) password policy, he will not check Jane's account because Jane is not working there any more. Right? Wrong! Maybe all 999 current employees have updated their accounts with hard-to-guess, complex passwords. It takes a single account, Jane's, with a dumb password like "janedoe47" for an attacker to infiltrate the network.
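Catching accounts like Bob's and Jane's doesn't require anything fancy: periodically diff the account database against the current employee roster. Here is a minimal sketch of that audit; the account names and data sources are made up for illustration:

```python
# Hypothetical orphan-account audit: flag every account that no longer
# maps to a current employee. In a real deployment the two lists would
# come from the directory service and the HR system respectively.

def find_orphan_accounts(accounts, current_employees):
    """Return accounts with no matching current employee, sorted."""
    roster = set(current_employees)
    return sorted(user for user in accounts if user not in roster)

accounts = ["alice", "bob", "jane", "mallory"]  # still active in the system
employees = ["alice", "mallory"]                # actually on the payroll

orphans = find_orphan_accounts(accounts, employees)
print(orphans)  # ['bob', 'jane'] - access that should have been revoked
```

Run on a schedule, a check like this would have caught both Bob's remote access and Jane's weak-password account long before an attacker did.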

Loose privileges are a liability!

To sum up, securing a physical area or a network means analyzing every possible scenario and providing general cover and failsafes, not just focusing on the "front door". Because that's not where an attacker will try to gain access. He will hit abandoned accounts with weak passwords, forgotten remote privileges and access that should have been revoked.

Sunday, January 28, 2007

Dogbert's Password Recovery Service for Morons.

Check it out here and here :)

Putting jokes aside, some "secure" services work that way :P

Wednesday, January 17, 2007

When Google turns against you...

Google began as a research project in January 1996 by two students at Stanford University, California. Larry Page and Sergey Brin (the two students) believed they had a better idea for searching the Internet than the existing engines, which ranked web sites based on the number of times a search term appeared on each page (so the more keywords you had in your home page, the higher rank you got - obviously inaccurate and exploitable). The Google search engine instead analyzed the relationships between websites (how, and how many, sites linked to another site). The google.com domain was registered in 1997.
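The link-analysis idea (known as PageRank) can be sketched in a few lines: a page's score depends on the scores of the pages linking to it, computed iteratively. The graph and damping factor below are illustrative toys, not Google's real data or implementation:

```python
# Minimal power-iteration sketch of link-based ranking (PageRank-style).
# links maps each page to the pages it links to.

def rank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    score = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # A page inherits a share of the score of every page linking to it.
            incoming = sum(score[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        score = new
    return score

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = rank(web)
# C is linked to by both A and B, so it ends up ranked highest.
print(max(scores, key=scores.get))  # C
```

The key property is exactly what Page and Brin were after: stuffing your own page with keywords does nothing here; only other sites linking to you moves your score.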

Today, the year is 2007. Almost ten years have passed and Google is estimated to hold 70-78% of the Internet search market. It has become a synonym for web searches and is even, unofficially, a verb ("Google 'this' to know more"). It's one big, central point (hence "point of failure") which searches and indexes the entire World Wide Web. Some call it the "front page of the Internet". People would kill for their sites to appear among the very first results for a keyword. And on the other hand, "if your site is unreachable by Google then it doesn't exist". This really changed the Internet over the last decade, but what about now? The truth is that various problems are coming up.

Google can be fooled or (worse) manipulated.

A couple of days ago I read an interesting article on Javalobby which talked about how their forums got spammed and how Google flagged them and removed them from its results. The guy who posted the story on Slashdot titled it "When Your Site Ceases to Exist". But let's take things from the top. Javalobby maintains a forum, parts of which allow unregistered (and anonymous?) users to post. That's a BIG mistake. It's also very naive (I'll expand on this another time). As a result, spammers exploited that and filled the forums with about 50,000 messages advertising pills, porn and gambling. Very soon Google's filters picked that up and considered the site yet another "bad apple", burying it down into the ground. How did that affect the site? According to the article's author, they lost about 10,000 visits a day which used to come from search results. That's a tragedy for someone who has invested time and money so that users can find his site when looking for certain things. Of course he didn't have a contract or any other agreement with Google. He just trusted it to do the job right.

Yesterday, I read another interesting story on GNUCITIZEN. According to this, their site was down for a couple of days due to technical difficulties, displaying an automated error page (the Wordpress default error). It's the same error page any site using the same software would show. The author discovered that Google correlated his site with others showing the same error because it thought they displayed the same content. Technically they did: it was the same HTML page, so, by purely deterministic logic, they should belong in the same group. The problem is they don't and, worse, even after the problem was fixed and the normal home page was back up, Google kept grouping the site with irrelevant ones. The author goes even further, considering this from a security point of view. He says that if an attacker sets up a couple of web sites (at minimal cost) displaying the same error page as above, plus some pay-per-click advertisements, Google will "work" for him, grouping them with legitimate sites which may hold a very high rank for certain keywords. So if an average user types those keywords he'll get a list of search results containing the legitimate sites, followed by the attacker's. As a result, the attacker, with little time and cost, will have managed to hijack those keywords in his favor and have malicious content showing up on the first page of search results. Imagine porn advertisements in the same results page as the link to "www.ibm.com" when someone searches for "IBM". And Google will have been his accomplice.
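The failure mode here is easy to reproduce in miniature. A naive deduplicator that groups pages by a hash of their content will lump a legitimate site serving a stock error page into the same cluster as any attacker page serving identical bytes. This is only a toy model of that grouping logic, not Google's actual algorithm, and the URLs are invented:

```python
# Naive duplicate-content grouping: pages with byte-identical HTML
# collapse into one cluster, legitimate or not.
import hashlib
from collections import defaultdict

def group_by_content(pages):
    """pages: dict of url -> html. Returns clusters of identical pages."""
    groups = defaultdict(list)
    for url, html in pages.items():
        fingerprint = hashlib.sha256(html.encode()).hexdigest()
        groups[fingerprint].append(url)
    return list(groups.values())

error_page = "<html><body>Error establishing a database connection</body></html>"
pages = {
    "legit-blog.example": error_page,       # real site, temporarily down
    "spam-site-1.example": error_page,      # attacker mimicking the error
    "spam-site-2.example": error_page,
    "ibm.example": "<html><body>IBM</body></html>",
}
for cluster in group_by_content(pages):
    print(cluster)
```

The temporarily broken legitimate site ends up in a three-member cluster with the two attacker sites, which is exactly the correlation the GNUCITIZEN author observed.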

So... what do we have here? It has been clearly demonstrated that Google is a single point (of failure) which can be fooled and manipulated. Nowadays if Google can't find you, you don't really exist. Does it have to be that way?

The answer is NO and it comes from the world of peer-to-peer (P2P) systems.

A lot of people know and use digg. It's a website with no actual content of its own, which allows any user to post a link to another website along with a short description. Then other users who find that link interesting place a positive vote on the post (not on the user). If they don't, they place a negative vote. And when some third user visits digg, he is presented with a portal containing the latest of everything on the Internet (and links to it all). There's a threshold (customizable per user) on what you see: you get the posts that collected a large number of positive votes and miss all the others. You can also vote positively or negatively on one or more posts, shaping their ranking. The result is that you read stuff that's interesting to most people and skip the rest.
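The vote-and-threshold mechanism above can be sketched in a few lines. This is a toy model of the idea, not digg's real implementation; the posts and threshold value are made up:

```python
# Toy digg-style ranking: posts accumulate up/down votes, and a
# per-reader threshold decides what reaches the front page.

class Post:
    def __init__(self, title):
        self.title = title
        self.score = 0

    def vote(self, up=True):
        self.score += 1 if up else -1

def front_page(posts, threshold=3):
    """Titles of posts at or above the reader's threshold, best first."""
    visible = [p for p in posts if p.score >= threshold]
    return [p.title for p in sorted(visible, key=lambda p: -p.score)]

a, b, c = Post("cool hack"), Post("press release"), Post("new exploit")
for _ in range(5):
    a.vote(up=True)       # widely liked
b.vote(up=True); b.vote(up=False)  # nets out to zero interest
for _ in range(3):
    c.vote(up=True)
print(front_page([a, b, c]))  # ['cool hack', 'new exploit']
```

Note that votes attach to the post, not the poster, and the threshold is the reader's own knob - both properties the paragraph above describes.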

And what about del.icio.us? It's a website with no content of its own (like digg) where users keep records of their personal bookmarks. It works this way: you come across a site you want to bookmark, you place it on del.icio.us and tag it (that is, characterize it using keywords). Then you - or anybody else - can select a tag and find all listings under it. For example, you find a funny link about computer games. You post it and tag it with the keywords "funny", "computer" and "games". A month later you can select "funny" to see all the links you considered funny - and so can everybody else.
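The post-then-lookup flow is essentially an inverted index from tags to URLs. Here is a minimal sketch of that idea; the class, URLs and tags are all invented for illustration:

```python
# del.icio.us-style tagging sketch: post a bookmark under several tags,
# then look up everything filed under any one tag.
from collections import defaultdict

class Bookmarks:
    def __init__(self):
        self.by_tag = defaultdict(set)   # tag -> set of URLs

    def post(self, url, tags):
        for tag in tags:
            self.by_tag[tag].add(url)

    def lookup(self, tag):
        return sorted(self.by_tag[tag])

shared = Bookmarks()
shared.post("funnygames.example", ["funny", "computer", "games"])
shared.post("kernelpatch.example", ["computer", "security"])
print(shared.lookup("computer"))  # ['funnygames.example', 'kernelpatch.example']
print(shared.lookup("funny"))     # ['funnygames.example']
```

Because every user posts into the same shared index, each tag lookup is effectively a search query answered by the accumulated judgment of everyone who tagged before you.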

Isn't that searching the Internet? The difference is that it's users who do the job and not some stupid bot. This isn't P2P on a system level but on a human one. It's not computers searching and exchanging results and listings; it's humans who do that. If I see something worth looking at, I'll post it so that my (Internet) friends can see it. Then each of them will promote it (by voting on it or tagging it) so that his (Internet) friends can see it, and so on. Nobody waits on a single source for information.

In both cases above, crowdsourcing is at work. Both digg and del.icio.us are empty vessels, filled with user activity from all over the globe. Pages are commented on and ranked by democratic votes. If someone posts something bad, it collects negative votes and is buried under the good posts. If someone makes a bad comment on a good post, the comment itself is buried by negative votes on the comment, not on the post. It's all very amazing.

OK, it's not quite "I want to know about Company ABC" - but only not YET. Imagine a similar site as big as Google where people registered and tagged everything. Then there would be an "ABC" tag, and not just that: there would be comments on the company, its products, its site, etc. How about that?

Isn't that a couple of generations ahead of looking at a huge list with nothing but a couple of URLs (Google style)?

Now that we are talking about it, there is another, more obvious, P2P model for searching: a distributed search engine. Each user's browser contains a small "agent" which looks at a small piece of the Internet, indexing sites. Then it communicates with its (network) neighbors (other browsers), informing them of its findings. They do the same thing. So when the user (human) types something into that search engine (the interface of which resides on his computer), it knows where to find it or, at least, who to ask. Then it gets multiple results, and even mirrors of the source, along with ranks and tags like the ones I've just talked about.
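The "ask your neighbors" idea can be sketched as simple query forwarding between agents. Real distributed search systems (DHTs, gossip protocols) are far subtler than this, and every name below is invented, but the sketch shows the basic flow:

```python
# Rough sketch of distributed search: each agent indexes a slice of the
# web and forwards queries it receives to its neighbours, collecting
# everyone's hits. A 'seen' set stops queries looping forever.

class Agent:
    def __init__(self, name, index):
        self.name = name
        self.index = index        # keyword -> URLs this agent knows about
        self.neighbours = []

    def search(self, keyword, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []
        seen.add(self.name)
        hits = list(self.index.get(keyword, []))
        for peer in self.neighbours:          # forward to neighbours
            hits += peer.search(keyword, seen)
        return hits

a = Agent("a", {"ibm": ["ibm.example"]})
b = Agent("b", {"java": ["javalobby.example"]})
c = Agent("c", {"ibm": ["ibm.example/research"]})
a.neighbours, b.neighbours = [b], [c]

print(a.search("ibm"))  # ['ibm.example', 'ibm.example/research']
```

No single node holds the whole index, so there is no central point to fool or manipulate - the property the paragraph above is after - at the cost of trusting whatever your neighbors report.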

When you are looking for something, wouldn't you prefer an option some guy you trust has recommended? Well, that's what I am talking about! I'm talking about the future.

No more "Google can't find me, therefore I don't exist". We won't have to care about that. If you exist, you will be found. Or maybe the saying will be reshaped into this: "I'm not interesting enough, therefore I don't exist". Haha, we are talking about the ultimate democracy. Which government or agency would be able to suppress such a system?

When you are trying to censor or manipulate content that the entire planet reviews and comments on, the battle is lost before it even begins.

Of course there are a lot of security issues here. Will someone be able to poison such a system by deploying thousands or millions of user-imitating bots? Will someone be able to run a DDoS attack by manipulating the system (the Slashdot Effect or Digg Effect)? Will phenomena like crowd psychology or gossiping take over? These (and many more) are factors we should really take into account, but decentralizing information databases and adopting a more open model for content management and distribution is certainly the way of the future. I'll get back to the security portion of this some time.

To sum up, the Google algorithm in its current form may not be able to handle the Internet of today well, and it certainly won't be able to do so in the future. Just as two students from California revised the way we searched the Internet in 1997, maybe it's time again to take the next step forward into something as radical and advanced as Google was for its time.