Thursday, 8 August 2013

Wikipedia loads

A bit off-topic for today. I wrote an app in Kotlin and MongoDB to follow Wikipedia loads. It was quite an interesting experience and I learned a lot from the experiment, enough material for a post of its own, but what I really want to show you is interesting from another perspective: it is a visualization of some facts about internet users, languages and cultures.

On each chart, the black line represents traffic in requests/hour, and the red line is rendered from the average traffic for that hour of the day across several days.
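For reference, the per-hour average behind the red line can be computed with a simple sketch like this (plain Java with invented names, not the actual app, which was written in Kotlin):

```java
public class HourlyAverage {

    // For each hour of the day, average the request counts over all
    // observed days; this produces the red "average" line of the charts.
    static double[] hourlyAverages(long[][] requestsPerHourByDay) {
        double[] avg = new double[24];
        for (int hour = 0; hour < 24; hour++) {
            long sum = 0;
            for (long[] day : requestsPerHourByDay) {
                sum += day[hour];
            }
            avg[hour] = (double) sum / requestsPerHourByDay.length;
        }
        return avg;
    }
}
```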

English: around the clock

Usage graph of the English Wikipedia

The English Wikipedia is a wiki built from over 6 million articles, maintained by a very big and very active community. What is interesting to me about the English language is that the sun never sets on it. English is the official language of the United States with more than 300 million people, of Canada with 20 million, Australia with 21 million and the United Kingdom with 60 million native English speakers. It is also an official language in India, in smaller Asian countries and in several African countries.
This gives the curve its interesting shape with several smaller peaks:
  • the big peak is at 18:00 UTC with roughly 14 million requests/hour
  • the second peak is at 2:00 UTC with 12 million requests/hour
  • the load never seems to drop below 8 million requests/hour (even that is huge), at around 7:00 UTC
  • the top load that I have seen is about 18 million requests/hour

German: day use

 

German Wikipedia usage

Let's see my favorite industrial nation. Unlike English, German is spoken almost exclusively in Europe. This may be why we see bigger ups and downs in the curve. The top of the average load is 2.1 million requests/hour, but it changes day by day; the top activity you can see is 4 million requests/hour. That is huge activity from the 120 million native German speakers.

Hebrew: Sunday wiki

Hebrew Wikipedia usage

I chose Hebrew from among the small languages. While it is spoken by very small minorities in many countries, it is the official and majority language only in Israel. These folks have an unusual habit: Friday is not a working day, but they work on Sunday. Saturday is the most sacred day for religious Jewish people, and they do not work then.
Actually, the low traffic that you see on the chart is not a Saturday. Saturdays are totally average days on the Hebrew Wikipedia, and the top day is Sunday. Sunday is always above average.

Hungarian: The Two Towers

Hungarian Wikipedia usage

The other small language that I chose is Hungarian, my native language. (Did you notice my grammar mistakes?) The interesting thing in this curve is the two peaks, at lunchtime and at dinner (19:00 GMT, which is 9 PM in Hungary in summer). I can't explain it. Do most people spend a little time checking mail, googling some stuff and reading Wikipedia at dinner? Anyway, usage after dinner falls dramatically.

Russian: Siberia

 

Russian Wikipedia usage graph
The last example is Russian; I wanted to see a language that is spoken in 10 different time zones all across Europe and Asia. It does not show in the traffic, very likely because of the population distribution of Russia: most Russians live in the European part, while Siberia is almost uninhabited. Nice rivers, forests, mountains.

That's it for today, thanks for reading! I took the very last picture over beautiful Siberia; I hope one day I will have a chance to see it up close. I mean, without having to build a railway :-)



Thursday, 4 July 2013

String.split versus Pattern.split

I believe most people in Java use String.split() to split a String into pieces. That method has been there for ages (since Java 1.4), everyone knows it and it just works.
The alternative is to use a Pattern instance. A Pattern is immutable, so you need only a single instance, created once, and it can serve your application forever. The people who use it are, in my opinion, smarter, because they know that String.split() actually needs to create a Pattern object, which is a relatively heavy operation, and they save that cost.

However, this is not the end of the story. It would be too short and would not make a blog post :-)

String.split() is smart: it has a special case for when the pattern is only one or two characters long and contains no special regexp characters. In this case it does not create a Pattern object but simply processes the whole thing in place. That special piece of code is not there by accident.

Let's see the speed comparison.
As you see, String.split() performs better when the character you use for splitting meets the above requirements. When you need to split on several different characters, which I believe is the less frequent case, you are much better off using a Pattern constant.
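To make the comparison concrete, here is a hypothetical micro-benchmark sketch (invented names, not the measurement behind the chart) contrasting the two approaches on a single-character delimiter:

```java
import java.util.regex.Pattern;

public class SplitBenchmark {

    // Compiled once; a Pattern is immutable and thread-safe, so it can be reused forever.
    private static final Pattern COMMA = Pattern.compile(",");

    // Accumulator that prevents the JIT from eliminating the split calls.
    static int sink;

    static long timeStringSplit(String input, int rounds) {
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            // single non-special character: String.split takes its fast path,
            // no Pattern object is created
            sink += input.split(",").length;
        }
        return System.nanoTime() - start;
    }

    static long timePatternSplit(String input, int rounds) {
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            sink += COMMA.split(input).length;
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        String input = "alpha,beta,gamma,delta,epsilon";
        // warm-up so the JIT compiles both paths before we measure
        timeStringSplit(input, 100_000);
        timePatternSplit(input, 100_000);
        System.out.println("String.split:  " + timeStringSplit(input, 1_000_000) + " ns");
        System.out.println("Pattern.split: " + timePatternSplit(input, 1_000_000) + " ns");
    }
}
```

Numbers will vary by JVM and hardware; the point is only that both variants produce identical results, so the choice is purely about delimiter shape and allocation cost.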

Sunday, 31 March 2013

compression again - system vs java implementation

Last time I mentioned that using the operating system's compression utility (gzip on Linux) performs better than the Java counterpart, even if you use it from Java (the good old Runtime.exec()). Of course it is not quite that simple :-) So in this test I compare the time needed to compress a random input with both the system and the Java implementation. The size of the input grows over the test, and so does the time needed to compress, but there is something interesting.

So as you see, the system gzip is faster, but it has a communication and process-creation overhead. The Java implementation runs without this overhead and therefore performs better on small inputs. The lines meet at about 512 KB: if the input is bigger, piping through a system command performs better.
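A minimal sketch of the two variants being compared (assuming a gzip binary on the PATH; the crossover point you measure will depend on your system):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCompare {

    // In-process compression with the JDK's GZIPOutputStream: no process
    // creation, no pipe traffic.
    static byte[] javaGzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    // Piping through the system gzip binary; pays process-creation and
    // pipe-communication overhead once per call.
    static byte[] systemGzip(byte[] input) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("gzip", "-c").start();
        // NOTE: writing all input before reading output is fine while the
        // data fits in the pipe buffers; for large inputs, pump stdin from
        // a separate thread to avoid a deadlock.
        try (OutputStream stdin = p.getOutputStream()) {
            stdin.write(input);
        }
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (InputStream stdout = p.getInputStream()) {
            byte[] buf = new byte[8192];
            for (int n; (n = stdout.read(buf)) > 0; ) {
                bos.write(buf, 0, n);
            }
        }
        p.waitFor();
        return bos.toByteArray();
    }
}
```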

This test was performed on Fedora 18 (the disastrous OS), x64 architecture; other architectures and operating systems may give different results.

Monday, 18 March 2013

To kill a GC

This is actually an answer to a recent patch written at work. So what happens if you have an object that overrides the finalize() method and in it needs to wait for a while, e.g. for a thread to join (a database transaction to finish, an IO operation, and so on)?

Here is some example code. It does not wait for a thread for 20 seconds, it only sleeps for one second, but anyway it gives the GC a little work.

public class Main {

    final int nr;
    final byte[] justsomememory;

    public Main(int nr) {
        this.nr = nr;
        // each instance holds on to 1 MB, so the heap fills up quickly
        justsomememory = new byte[1024 * 1024];
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 20000; i++) {
            // the instance becomes garbage right away and is queued for finalization
            Main main = new Main(i);
            Thread.sleep(10);
        }
    }

    @Override
    protected void finalize() throws Throwable {
        // blocks the finalizer thread for a second per object
        Thread.sleep(1000);
    }
}


Let's use jconsole to look under the hood of the VM.

This is a healthy memory usage.
This is what you see if you have any blocking operation in finalize().
I think this is visual enough. So in general, I agree with those who try to avoid having a finalize() method in their code. And really, why should you have one?

Monday, 4 February 2013

Caching UUIDs?

This is a short test inspired by my day job. The question is: is it worth caching objects in memory that could simply be parsed again? Examples of such objects are Integers, UUIDs and some others. As you may know, the first few Integer values are actually cached by the runtime, so if you call Integer.valueOf, you may already get a cached instance. That makes sense for Integer. With UUID the situation is a bit different, since there is no "first X" set of UUIDs. So let's say you use some UUIDs frequently. The question is: can you get any advantage out of caching UUID objects rather than parsing them from strings?

Test method

All data series are measured with 10 different datasets. The datasets differ from each other in the share of repeating UUIDs: the first one does not have any, the last one has 90 percent repeating UUIDs.

So most importantly, let's measure plain parsing (no cache), just to have something to compare against. Then, let's measure caching with a HashMap. I have to add that a HashMap is not a cache, and whenever I see a HashMap used as a cache, I have terrible nightmares about OOM exceptions coming at me like zombies.
Third, let's measure the performance of a real cache; I chose ehcache. You can choose your own pet cache technology (to be honest I use Infinispan, but for simplicity I just wanted good old ehcache).
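The "no cache" and HashMap variants look roughly like this (a sketch with invented names; the ehcache setup is omitted):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class UuidCache {

    // An unbounded HashMap used as a "cache" -- exactly the OOM hazard
    // described above; shown here only so the two variants can be compared.
    private static final Map<String, UUID> CACHE = new HashMap<>();

    // no cache: just parse every time
    static UUID parsed(String s) {
        return UUID.fromString(s);
    }

    // HashMap "cache": parse on a miss, reuse the instance on a hit
    static UUID cached(String s) {
        UUID u = CACHE.get(s);
        if (u == null) {
            u = UUID.fromString(s);
            CACHE.put(s, u);
        }
        return u;
    }
}
```

The cache hit avoids UUID.fromString entirely, which is the only possible gain; the datasets vary how often that hit actually happens.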

Results

Ok, let's see what we got.
  • As expected, no cache performs more or less the same every time.
  • HashMap "caching" adds a little speed above 50 percent repeating input. That is a little compensation for the OOMs you will get, or for the code you will write to avoid them :)
  • The ehcache implementation has some difficulty keeping up: it only beats the "no cache" solution when the share of repeating UUIDs is over 90 percent, and even then the gain is small.

So my conclusion is: I would probably not cache objects that are this easy to create. I would only try this optimization once the database interactions are optimal, the app scales well to multiple processors and even multiple nodes, and so on... and even then it looks like a small and painful victory.