24 Oct, 2010
As a web 2.0 company today, five nines no longer cuts it with respect to uptime. We do not have the luxury of stopping at 99.999% availability; users expect 100% uptime. This post is a macro model of the things that need to be taken care of to achieve 100% uptime. In keeping with the industry's love for acronyms, I call it the CRABS model - Capacity, Redundancy, Abuse, Bugs, Scalability.
Capacity: You must be aware of the exact capacity your infrastructure can handle: requests, number of users, amount of storage, number of transactions, network throughput and so on. This applies to every component within the system; each service has its own capacity limitations. If your architecture comprises a database, an app server, a queue, a mail server, and a memory cache, each of these components has its own capacity limits. Capacity also depends on the state of the system, time of day, user patterns etc. For instance, if you are heavily dependent on memory caches, and your application design allows for starting out with a cold cache, then the load your application can handle during that period will differ from what it can handle with a warm cache.
Knowing the capacity of every component in the system allows you to do the following -
* determine the peak load your system can handle
* put limits into place to ensure your system never gets more requests than it can handle
* determine when the system is reaching close to peak capacity and pre-emptively scale the infrastructure to account for growth
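One way to put such limits into place is a token bucket in front of each component. The sketch below is illustrative only (the class and parameter names are my own, not from any particular product): requests beyond the known capacity are rejected outright instead of being allowed to degrade the whole system.

```python
import time

class TokenBucket:
    """Admit at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill in proportion to elapsed time, never beyond capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed load rather than exceed known capacity
```

Every component with a measured ceiling (app server, mail sender, API endpoint) can sit behind a limiter like this, tuned to that component's capacity.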
Redundancy: Every component must have adequate redundancy in an active-active model. These days a simple n+1 does not cut it, nor does a standby failover. Most redundant clusters provision capacity well beyond that required during peak loads. Additionally, it is no longer acceptable to require even a few minutes of downtime for a standby to start up in case the primary node goes down. And it is certainly not acceptable to lose any data. Downtime of any node or any component is expected to be completely transparent to end users. This starts becoming difficult when you take into account user sessions, state and data storage, and it requires thought at design time. Applications have to be designed from the ground up to be redundant, to the extent that downtime of multiple hardware and software components does not impact the end user in any way. Larger applications take into account geo-redundancy and the possibility of entire datacenters or geographical locations being unavailable for a certain period of time. As many components as possible should run in active-active mode, where failure of one of a set has no impact on the end user. Think of every component (hardware and software) in your setup and allow for several of them to fail at the same time. Ensure adequate capacity and data redundancy.
Abuse: Expect users, hackers, customers, vendors, developers and unrelated third parties to intentionally or unintentionally abuse your system. I divide abuse into the following categories -
- Denial of service: Someone sending unwarranted requests to your system uses up its peak capacity, resulting in a denial of service to your other users. These can be application requests or network requests. The requests may be intentional or unintentional, and may be distributed. They may even be legitimate - for instance, one may legitimately use your mail system to send out a million emails. Preventing DoS requires identifying all potential scenarios and ensuring that no service or device in your infrastructure permits any user or system to send more than a warranted number of requests. Network-based DDoS attacks must be mitigated using dedicated DDoS mitigation equipment that scrubs the traffic.
- Security breaches: Someone accessing your system with the intention of damaging it, exploiting a vulnerability in the network, application, OS etc. to gain access and compromise your services. One needs to employ server hardening, firewalls, strict security processes, access policies, intrusion detection systems, the OWASP guidelines, application security reviews and much more to keep one's services tightly secured.
- Manual boo-boos: Many a downtime has been the result of an unsuspecting sysadmin running "rm -fr" or a fatigued developer running a "delete from table" without a where clause. One can prevent these by defining structured processes and policies.
Bugs: Another frequent cause of downtime or service unavailability is bugs in the software. Heed the following tips to keep defects out of production -
- Adequate automated and manual unit and functional testing of the software
- Dog-fooding and staggered releases, wherein new versions are always released to limited internal and external audiences before being rolled out to the entire user base
Scalability: Careful capacity planning does not save you from getting tech-crunched, slash-dotted or dugg. Your application design must support infinite scalability. This again requires careful planning with respect to application design and hardware selection. Vertical and horizontal partitioning, clustering, stateless configurations and more help in creating a design that scales linearly by adding nodes, without requiring any downtime. Always think of millions of users.
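Horizontal partitioning, at its simplest, is just deterministic routing of a key to one of N shards. A minimal sketch (shard names are hypothetical; note that plain hash-mod forces a near-total reshuffle when the shard count changes, which is exactly why consistent hashing exists):

```python
import hashlib

SHARDS = ["db-0", "db-1", "db-2", "db-3"]  # hypothetical shard names

def shard_for(key, shards=SHARDS):
    """Stable hash-mod routing: every stateless node computes the same
    shard for a given key, with no central lookup table."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]
```

Because the routing is a pure function of the key, any app server can handle any request, which is what makes a stateless, linearly scaling tier possible.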
4 Jun, 2010
It is no surprise that 6 of the top 10 desktop applications by usage time are browsers (source: Wakoopa). We all have our gripes with the browser as an application container – sandboxing, cross-browser compatibility issues, no access to native APIs. The developments over the last few years, however, have been very promising – Ajax, Flex, HTML5, Web Sockets, Web Hooks, Google Gears – and with all that's afoot, a browser application nowadays provides a near-native experience.
One of my many personal peeves has been the lack of raw socket connection capabilities and bi-directional communication from a browser. This too has changed considerably over the years. This article lists various bi-directional communication methods that one can use from a browser -
- Comet: Comet is more a collection of techniques that provide bi-directional communication between a browser and a server. It is a superset of Long-polling, BOSH, and other such techniques
- Long-polling: This merely refers to an HTTP connection that is maintained for a long duration, without disconnection. A server, upon receiving a request, keeps the connection with the client open, and sends streams of data back to the client. The response is never deemed to have completed, hence the server can continue to keep pushing data to a client over this connection, thus emulating push
- BOSH: A BOSH library uses up to 2 connections to a server - one for the client to send data to the server, and another for the server to send data to the client. The client opens the first connection and sends a request to the server. The server does not respond immediately, and can subsequently use this connection to send a response whenever it is ready. If the client meanwhile needs to send data to the server, it does so through a request on a second connection. The moment the server receives this request on the second connection, it sends a response out on the first connection, thus reversing the roles of the two connections
- Flash: One can use Flash to establish a socket connection to a server. This is a far more efficient method for bi-directional communication. However, it has certain limitations. Firstly, Flash supports two types of socket connections – XMLSocket and a raw TCP socket – so no UDP. Secondly, from a Flash widget one can only make a socket connection to the domain from which the page was loaded. No cross-domain calls are permitted unless an explicit cross-domain policy file is provided by the server you are connecting to. Therefore one cannot load a Flash widget from server1 and make a socket connection to server2. For instance, if one were to write a Flash-based MSN client, the client would not be able to connect directly to the MSN servers. One solution would be to proxy the connection through a TCP proxy installed on your server; however, this means you would need server infrastructure to relay the connection. A nice article describing how to achieve this is available here.
- Web Sockets: Plagiarising from Wikipedia – “WebSockets is a technology providing for bi-directional, full-duplex communications channels, over a TCP socket and is being standardized by the W3C and IETF”. WebSockets is still limited in the sense that it is not a protocol-independent binary socket connection. A reference implementation (client and server) is available at http://jwebsocket.org/
- Java Applets: Java applets are far more powerful than Flash when it comes to raw socket capability. You have a choice of protocols (UDP/TCP) and most of the Java stack at your disposal. Java applets too have a sandboxing restriction, though it is easier to circumvent than Flash's. An unsigned applet can only make a socket connection to the server from which the page was loaded; a signed applet, however, can set up a socket connection to any server without having to use a proxy. To my mind this would be the ideal method, if only I had the slightest confidence in applets working as advertised. In the last few years I am yet to see a single Java applet run without error in my browser. I have never bothered to troubleshoot it, but it does not inspire confidence
- External application: I now come to the most elegant method of achieving powerful bi-directional access between a browser and any server, with complete native capabilities. In fact this method is the raison d'être for this article. While most of the above methods work in most scenarios, they still lack the power of a native desktop application (barring the browser plugin). Most of the methods above are sandboxed, inefficient, require server proxies, and cannot access underlying native OS functionality. This brings me to a far simpler yet superior method – writing a native application that runs on the user's machine and exposes a web server (or some socket server) to which the app in the browser can communicate using … you guessed it … any of the above methods (Flash/BOSH/Comet/HTTP). Seemingly Google's video chat plugin works in this manner. All the cool P2P, UDP, ICE, NAT-traversal magic is written as an external application that the user downloads. The data is then streamed from this out-of-process app into the browser, where it can be played using the Flash player. This method in fact reminds me very much of how Rhomobile works on the mobile phone. As part of my research I also came across numerous other applications that use this technique. Another interesting project worth mentioning is LittleShoot by Adam Fisk. LittleShoot is an open-source implementation of P2P in the browser. It works by downloading an application that runs on your machine as a service; when you visit the LittleShoot website, the webpage detects that you have the app installed and can use the app (which is a mini web server with complete OS access) to do pretty much anything.
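A minimal sketch of the external-application pattern: a tiny native helper that serves JSON on a localhost port, which page-side script can then poll using any of the techniques above. The port, path and payload here are all hypothetical, purely for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_status():
    # stand-in for whatever native capability the browser cannot reach
    # directly (file system, raw sockets, P2P state, ...)
    return json.dumps({"app": "helper", "native_access": True})

class HelperHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = build_status().encode("utf-8")
        self.send_response(200)
        # lets a page loaded from your site accept this cross-origin
        # response from the local helper
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # the browser app talks to this endpoint via XHR / Flash / long-polling
    HTTPServer(("127.0.0.1", 8123), HelperHandler).serve_forever()
```

The helper runs with full OS privileges; the browser page merely becomes the UI, which is essentially the LittleShoot and Google video chat arrangement described above.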
17 May, 2010
Since my discussion thread on the efficiency of the in-memory data structures of ZeroMQ with Martin Sustrik, I have been reading up bit by bit on efficient data structures, primarily from the perspective of memory utilization. Data structures that provide constant lookup time with minimal memory utilization can give a significant performance boost, since access to the CPU cache is considerably faster than access to RAM. This post is a compendium of a few data structures I came across and their salient aspects
Judy arrays http://judy.sourceforge.net/doc/10minutes.htm
Excerpt: A Judy tree is generally faster than and uses less memory than contemporary forms of trees such as binary (AVL) trees, b-trees, and skip-lists. When used in the “Judy Scalable Hashing” configuration, Judy is generally faster than a hashing method at all populations. A (CPU) cache-line fill is additional time required to do a read reference from RAM when a word is not found in cache. In today’s computers the time for a cache-line fill is in the range of 50..2000 machine instructions. Therefore a cache-line fill should be avoided when fewer than 50 instructions can do the same job. Judy rarely compromises speed/space performance for simplicity (Judy will never be called simple except at the API). Judy is designed to avoid cache-line fills wherever possible. The Achilles heel of a simple digital tree is very poor memory utilization, especially when the N in N-ary (the degree or fanout of each branch) increases. The Judy tree design was able to solve this problem. In fact a Judy tree is more memory-efficient than almost any other competitive structure (including a simple linked list).
HAT-trie – a cache-conscious trie http://portal.acm.org/citation.cfm?id=1273761
Excerpt: Tries are the fastest tree-based data structures for managing strings in-memory, but are space-intensive. The burst-trie is almost as fast but reduces space by collapsing trie-chains into buckets. This is not however, a cache-conscious approach and can lead to poor performance on current processors. In this paper, we introduce the HAT-trie, a cache-conscious trie-based data structure that is formed by carefully combining existing components. We evaluate performance using several real-world datasets and against other high-performance data structures. We show strong improvements in both time and space; in most cases approaching that of the cache-conscious hash table. Our HAT-trie is shown to be the most efficient trie-based data structure for managing variable-length strings in-memory while maintaining sort order.
Burst Trie http://goanna.cs.rmit.edu.au/~jz/fulltext/acmtois02.pdf
Excerpt: Many applications depend on efficient management of large sets of distinct strings in memory. We propose a new data structure, the burst trie, that has significant advantages over existing options for such applications: it requires no more memory than a binary tree; it is as fast as a trie; and, while not as fast as a hash table, a burst trie maintains the strings in sorted or near-sorted order. These experiments show that the burst trie is particularly effective for the skewed frequency distributions common in text collections, and dramatically outperforms all other data structures for the task of managing strings while maintaining sort order.
Radix trie (aka Patricia trie) http://en.wikipedia.org/wiki/Radix_tree
Excerpt: The radix tree is easiest to understand as a space-optimized trie where each node with only one child is merged with its child. Unlike balanced trees, radix trees permit lookup, insertion, and deletion in O(k) time rather than O(log n)
Ternary Search Trees http://en.wikipedia.org/wiki/Ternary_search_tree
Excerpt: A trie is optimized for speed at the expense of size. The ternary search tree replaces each node of the trie with a modified binary search tree. For sparse tries, this binary tree will be smaller than a trie node. Each binary tree implements a single-character lookup. It has the typical left and right children which are checked if the lookup character is greater or less than the node’s character, respectively. A third child is used if the lookup character is found on that particular node. Unlike the other children, it links to the root of the binary search tree for the next character in the string
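To make that excerpt concrete, here is a minimal ternary search tree in Python. It is illustrative only (a real implementation would also care about iteration, deletion and memory layout), but it shows the three-way branching described above.

```python
class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "end")

    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None  # <, ==, > children
        self.end = False                    # a stored string ends here

def tst_insert(node, s, i=0):
    """Insert string s, descending one character per 'equal' link."""
    ch = s[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, s, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, s, i)
    elif i + 1 < len(s):
        node.eq = tst_insert(node.eq, s, i + 1)  # next character
    else:
        node.end = True
    return node

def tst_contains(node, s, i=0):
    if node is None:
        return False
    ch = s[i]
    if ch < node.ch:
        return tst_contains(node.lo, s, i)
    if ch > node.ch:
        return tst_contains(node.hi, s, i)
    if i + 1 < len(s):
        return tst_contains(node.eq, s, i + 1)
    return node.end
```

Each node holds a single character plus three pointers, so sparse character distributions cost far less memory than a trie node with a full fanout array.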
Next steps: to trie and set up benchmarks for some of these in a practical application
9 May, 2010
- You can check if a user coming to your website has already visited any of your competitors, and if so target specific offers to them
- If you rank at the 5th position in Google for a keyword you can check if the user has visited any of the previous 4 links
More details available here
6 May, 2010
We need a simple message queue to ensure asynchronous message passing across a bunch of our server side apps. The message volume is not intended to be very high, latency is not an issue, and order is not important, but we do need to guarantee that the message will be received and that there is no potential for failure irrespective of infrastructure downtime.
Dhruv from my team had taken up the task of researching various persistent message queue options and compiling notes on them. This is a compendium of his notes (disclaimer – this is an outline of our experience, there may be inaccuracies) -
RabbitMQ:
- Some reading on clustering http://www.rabbitmq.com/clustering.html
- DNS errors cause the DB (Mnesia) to crash
- A RabbitMQ instance won’t scale to LOTS of queues, each with fair load, since queue metadata is stored in memory; in a clustered setup, each queue’s metadata (but not the queue’s messages) is also replicated on each node. Hence there is the same per-queue overhead on every node in a cluster
- No ONCE-ONLY semantics. Messages may be delivered twice by RabbitMQ to the consumer(s)
- Multiple consumers can be configured for a single queue, and they will all get mutually exclusive messages
- Unordered; not FIFO delivery
- Single socket multiple connections. Each socket can have multiple channels and each channel can have multiple consumers
- No provision for ETA
- maybe auto-requeue (based on timeout) — needs investigation
- Only closing the connection NACKs a message; removing the consumer from the channel does NOT. Hence, all queues being listened to on that channel/connection are closed for the current consumer
- NO EXPONENTIAL BACKOFF for failed consumers. Failed messages are retried almost immediately, so an error in consumer logic that crashes the consumer while consuming a particular message may block the whole queue. The consumer therefore needs to be programmed well, i.e. error free. However, apps are, well, apps…
- Consumer has to do rate limiting by not consuming messages too fast (if it wants to); no provision for this in RabbitMQ
- It will use only its own DB; you can’t configure MySQL or any such thing
Clustering and Replication:
- A RabbitMQ cluster is just a set of nodes running RabbitMQ. No master node is involved.
- You need to specify the hostnames of the nodes in a cluster manually, on the command line or in a config file.
- Basic load balancing: nodes in a cluster redirect requests to other nodes
- A node can be a RAM node or a disk node. RAM nodes keep their state only in memory (with the exception of the persistent contents of durable queues which are still stored safely on disc). Disk nodes keep state in memory and on disk.
- Queue metadata shared across all nodes.
- RabbitMQ brokers tolerate the failure of individual nodes. Nodes can be started and stopped at will
- It is advisable to have at least 1 disk node in a cluster of nodes
- You need to specify which nodes are part of a cluster during node startup. Hence, when A is the first to start, it thinks it is the only one in the cluster; when B starts it is told that A is also in the cluster; and when C starts, it should be told that BOTH A and B are part of the cluster. That way, if A or B goes down, C still knows one of the machines in the cluster. This is only required for RAM nodes, since they don’t persist metadata to disk. So if C is a RAM node and it goes down and comes back up, it has to be manually told which nodes to query for cluster membership (since it doesn’t store that state locally).
- Replication needs to be investigated (check the additional resources below); however, from initial reading it seems queue data replication does not exist
- FAQ: “How do you migrate an instance of RabbitMQ to another machine?”. Seems to be a very manual process.
- Any number of queues can be involved in a transaction
- RabbitMQ benchmarks (inconclusive): http://www.sheysrebellion.net/blog/2009/06/
- Some more RabbitMQ benchmarks: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-October/005189.html
- If you are still thirsty: http://www.rabbitmq.com/faq.html
Apache Qpid:
- Supports transactions
- Persistence using a pluggable layer — I believe the default is Apache Derby
- This, like the other Java-based product, is HIGHLY configurable
- Management using JMX and an Eclipse Management Console application - http://www.lahiru.org/2008/08/what-qpid-management-console-can-do.html
- The management console is very feature rich
- Supports message Priorities
- Automatic client failover using configurable connection properties
- A cluster is nothing but a set of machines that have all the queues replicated
- All queue data and metadata is replicated across all nodes that make up a cluster
- All clients need to know in advance which nodes make up the cluster
- Retry logic lies in the client code
- Durable Queues/Subscriptions
- Has bindings in many languages
- For the curious: http://qpid.apache.org/current-architecture.html
- In our tests -
- Speed: non-persistent mode: 5000 messages/sec (receive rate); persistent mode: 1100 messages/sec (receive rate). (Send rate will typically be a bit higher, but when you start off with an empty queue they are almost the same for most queue implementations.) The interesting bit is that even in transacted mode, I saw a lot of message loss when I crashed the broker (by crash I mean Ctrl+C, not even the kill -9 type of thing that I usually do). I stress this because apps can usually hook Ctrl+C and save data before quitting, but Qpid didn’t think it prudent to do so. Out of 1265 messages sent (and committed), only 1218 were received by the consumer (before the inflicted crash). Even restarting the broker and consumer didn’t change that. We observed similar behaviour with RabbitMQ in our tests. However, the RabbitMQ docs mention that you need to run in TRANSACTED mode (not just durable/persistent) for guaranteed delivery. We haven’t run that test yet.
Apache ActiveMQ:
- HIGHLY configurable. You can probably make it do anything you want
- You can choose a message store. 4 are already available
- Has lots of clustering options:
- Shared nothing Master-Slave: ACK sent to client when master stores the message
- Shared Database: Acquires a lock on the DB when any instance tries to access the DB
- Shared Filesystem: Locks a file when accessing the FS. Issues when using NFS with file-locking; or basically any network based file system since file locking is generally buggy in network file systems
- Network of brokers: This is an option that allows a lot of flexibility. However, it seems to be a very problematic/buggy way of doing things since people face a lot of issues with this configuration
- Default transport is blocking I/O with a thread per connection. Can be changed to use NIO
- Horizontal scaling: Though they mention this, the way to achieve this is by using a network of brokers
- Partitioning: We all know Mr. Partitioning, don’t we. The client decides where to route packets and hence must maintain multiple open connections to different brokers
- Allows producer flow-control!!
- Has issues wrt lost/duplicate messages, but there is an active community that fixes these issues
- ActiveMQ crashes fairly frequently, at least once per month, and is rather slow - http://stackoverflow.com/questions/957507/lightweight-persistent-message-queue-for-linux
- Seems to have bindings in many languages(just like RabbitMQ)
- Has lots of tools built around it
- JMS compliant; supports XA transactions: http://activemq.apache.org/how-do-transactions-work.html
- Less performant than RabbitMQ
- We were able to perform some tests on Apache Active MQ today, and here are the results:
- Non persistent mode: 5k messages/sec
- Persistent mode: 22 messages/sec (yes that is correct)
- There are multiple persisters that can be configured with ActiveMQ, so we are planning to run another set of tests with MySQL and a file as the persisters. However, the current default (KahaDB) is said to be more scalable (and offers faster recoverability) than the older default (file/AMQ Message Store: http://activemq.apache.org/amq-message-store.html).
- The numbers are fair. Others on the net have observed similar results: http://www.mostly-useless.com/blog/2007/12/27/playing-with-activemq/
- With MySQL, I get a throughput of 8 messages/sec. What is surprising is that much better results are possible with MySQL, but ActiveMQ uses the table quite unwisely.
- ActiveMQ created the tables as InnoDB instead of MyISAM even though it doesn’t seem to be using any of the InnoDB features.
- I tried changing the tables to MyISAM, but it didn’t help much. The messages table structure has 4 indexes!! Inserts take a lot of time because MySQL needs to update 4 indexes on every insert, which kills performance. However, I don’t know whether performance should suffer with few (< 1000) messages in the table. Either way, this structure won’t scale to millions of messages since everyone will block on this one table.
15 Apr, 2010
At Directi, we have been toying with some ideas around making some of our web apps mobile friendly. I spent some time reading and reviewing various online guides on mobile website development. Here are a few of the good resources I found -
- http://mobiforge.com/designing/story/effective-design-multiple-screen-sizes – Designing a mobile website for multiple screen sizes
- http://mobiforge.com/designing/story/mobile-web-design-getting-point-part-i - This article investigates salient aspects of Google, Facebook and Twitter’s mobile websites
- http://mobiforge.com/designing/story/mobile-web-design-getting-point-part-ii – This article applies principles from part i towards building an online store
- http://mobithinking.com/best-practices/a-three-step-guide-usability-mobile-web - A Three Step Guide to Usability on the Mobile Web
- http://mobithinking.com/ – Nice articles on stats, marketing advice etc for mobile devices
- http://eng.designerbreak.com/2009/tutorial/create-a-mobile-site/ – A tutorial on creating a mobile website
- http://www.w3.org/TR/mobile-bp/ – W3C guide on Mobile Web Best Practices 1.0
- http://deviceatlas.com/ – the most comprehensive data source on handset detection and handset information – provides APIs and tools
- http://ready.mobi/ – The mobiReady testing tool evaluates mobile-readiness of a website using industry best practices & standards. The free report provides both a score (from 1 to 5) and in-depth analysis of pages to determine how well your site performs on a mobile device
- A Mobile web developers guide
- Oreilly book – Mobile Design and Development: Practical Concepts and Techniques for Creating Mobile Sites and Web Apps
13 Apr, 2010
We are beginning an implementation of OAuth in one of our projects. I just finished reading a ton of resources; in the end I only needed a few. Here they are, in the recommended order -
- http://hueniverse.com/oauth/ – The best layman explanation of how OAuth works – strongly recommended resource. Read every section.
- http://oauth.net/ – The official OAuth site, contains the protocol specifications
- http://tools.ietf.org/html/draft-hammer-oauth-10 – The latest spec
- http://oauth.net/code/ – Links to ready OAuth libraries in every language
OAuth is a fairly simple protocol, especially if you are familiar with the basics of HTTP, nonces, basic encryption/digital signatures and the like.
14 Mar, 2010
Most of us have heard of the Netflix million-dollar competition (read here, here and here) that lasted 3 years and attracted 51,000 contestants from 186 countries, all competing AND co-operating to build a better recommendation engine for Netflix, so that its users could get more accurate movie suggestions. The winners – BellKor’s Pragmatic Chaos, a team from AT&T Research – took the $1 million prize by providing the winning algorithm. The innovations and ideas generated on this subject during the course of 3 years were a feat unachievable by any single corporate research division.
Crowdsourcing (a term coined by Jeff Howe of Wired magazine) has been gaining considerable traction as a feasible, scalable, practical and even cost-effective method of getting stuff done – whether it is design, development, ideating, problem solving or more. We are not unfamiliar with the concept – everyone who has ever used Wikipedia has used a product of crowdsourcing. Over the last several years, many web applications and portals have emerged that have taken crowdsourcing to the next level by webifying the process and making it accessible to the masses. Taking a page from Auren Hoffman’s and Joe Kraus’ articles – it has never been a better time to be an entrepreneur. What used to take millions of dollars, swanky offices, expensive 64-way Sun Solaris boxes, and an elite team can now be achieved by a single person with a smart idea. Think about it. All you need is a great idea. Don’t have programmers? Make your way to TopCoder or Rent-a-coder and hire a just-in-time team. Need to give your brand visibility? Head over to crowdSpring or 99designs and get a logo and a look from hundreds of contributors for cheap. Need servers? You can now run on the same scalable infrastructure that Amazon and Google run on. From design and marketing to development and deployment – you can avail yourself of the best resources in real time, without offices, infrastructure, capital or people. Crowdsourcing and cloud computing will take innovation and starting up to a whole new level.
Enough of a digression though – having spent a better part of my Sunday researching Crowdsourcing – here is a compendium of resources for your benefit -
- Look who’s Crowdsourcing – http://www.wired.com/wired/archive/14.06/look.html
- A collection of >100 successful crowdsourcing examples – http://crowdsourcingexamples.pbworks.com/
- Manual tasks
11 Jan, 2010
Kestrel is a simple, high-performance, loosely ordered, reliable queue that Twitter uses as the backbone of its messaging infrastructure. I spent some time this morning studying it, and here are my notes -
- Extremely small footprint (<2000 lines of Scala code)
- JVM based (written in Scala)
- Servers in a Kestrel cluster have no communication with one another. Clients simply pick a server at random for gets and puts. This results in a loose ordering of messages, which may be quite OK for most messaging applications
- There is no replication
- While the queues are maintained entirely in memory, they are written to a journal file to prevent data loss due to a server shutdown or failure (quite similar to Redis)
- Supports a reliable read, where a client can fetch an item from the queue within an “open” and “close” block, and if the client disconnects before sending a “close” the item is re-enqueued
- NIO based using Apache MINA
- Supports item expiration
- Kestrel Home – http://github.com/robey/kestrel
- Kestrel Documentation - http://github.com/robey/kestrel/blob/master/docs/guide.md
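Kestrel's reliable-read semantics are easy to model. The sketch below is my own simulation of the open/close protocol, not Kestrel code: an item handed out stays in flight until the client closes the read, and a disconnect (abort) puts it back at the head of the queue.

```python
from collections import deque

class ReliableQueue:
    """Toy model of Kestrel's open/close reliable read."""

    def __init__(self):
        self.items = deque()
        self.in_flight = None

    def put(self, item):
        self.items.append(item)

    def open(self):
        """Hand out the next item; it stays in flight until close/abort."""
        if self.in_flight is not None:
            raise RuntimeError("close or abort the previous read first")
        if not self.items:
            return None
        self.in_flight = self.items.popleft()
        return self.in_flight

    def close(self):
        self.in_flight = None  # ack: the item is gone for good

    def abort(self):
        """Models a client disconnect: the item is re-enqueued at the head."""
        if self.in_flight is not None:
            self.items.appendleft(self.in_flight)
            self.in_flight = None
```

The re-enqueue on abort is what gives at-least-once delivery without any server-to-server coordination, at the cost of possible duplicates.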
31 Dec, 2009
One of the seemingly trivial yet daunting challenges in scaling a datastore is the prevalence of auto-increment IDs to represent unique records in a database. Since any scaling involves horizontal partitioning of data, thus distributing inserts, how does one ensure the uniqueness of IDs generated on these independent machines? One replacement is the GUID (or UUID), which is nothing more than a randomly generated 128-bit number (there are various methods of generating one).
The uniqueness guarantee comes from the extremely low probability of two randomly generated 128-bit numbers ever colliding. Just to give you a sense of the size of the space: if a computer were to generate a new GUID every millisecond, it would take 10790283070806014188970529154.99 years to generate all GUIDs. That is roughly 830 million billion times the estimated age of the universe.
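In Python, a version-4 GUID is one call away. (Strictly, uuid4 has 122 random bits; 6 bits are fixed by the RFC 4122 version and variant fields, which does not materially change the collision math.)

```python
import uuid

guid = uuid.uuid4()  # random GUID, e.g. for use as a record identifier
print(guid)          # 36-char string in 8-4-4-4-12 hex groups

# two independently generated GUIDs are, for practical purposes, never equal
assert uuid.uuid4() != uuid.uuid4()
```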
Is a GUID truly unique?
Mathematically speaking, no. Since the GUID space consists of 2^128 possibilities, one cannot generate 2^128 + 1 unique GUIDs. Because the time taken to generate all combinations is so high, the probability of a collision within any one application’s space is quite low. However, this probability increases as the set of generated GUIDs grows, due to the birthday paradox. As per the birthday paradox, for a sample of n values drawn from a total space of s values, the probability of at least one collision is given by the formula – P = 1 - s! / (s^n * (s-n)!).
Applying this formula to a space of 2^128 values, the probability of at least one collision becomes non-trivial when the number of generated values reaches about 10^17 to 10^18 (roughly 0.0015% to 0.15%).
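These figures can be checked numerically. The factorial form is infeasible to evaluate for s = 2^128, so the sketch below uses the standard approximation P ≈ 1 - e^(-n(n-1)/2s), which is accurate when n is tiny relative to s.

```python
import math

def collision_probability(n, s=2**128):
    """Approximate P(at least one collision) when drawing n values
    uniformly at random from a space of size s (birthday bound)."""
    return -math.expm1(-n * (n - 1) / (2 * s))

print(collision_probability(10**17))  # ~1.5e-5, i.e. ~0.0015%
print(collision_probability(10**18))  # ~1.5e-3, i.e. ~0.15%
```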
Advantages of using GUIDs
- Globally unique without central generation. Allows easier partitioning of data without having to rely on a central auto-incrementer
- GUIDs are obfuscated and cannot be guessed, whereas auto-increment IDs have the disadvantage that one can guess subsequent IDs given a starting point. This allows attacks such as data scraping, and potentially even DoS attacks, by simply querying a service for successive IDs from a starting point
- Can be generated by the middle tier as opposed to the data layer
Disadvantages of using GUIDs
- Take additional processing power to generate
- Do not index as efficiently as smaller int values, thus increasing the time taken for standard CRUD operations
- Take up additional space (4 times as much – 16 bytes versus 4)
- Can result in data and index fragmentation if a proper indexing mechanism is not chosen
- Can be unintuitive