
Wednesday, 20 February 2013

Google engineer reworks the Linux DIO code



Kent Overstreet, a Google software engineer who has worked on the Linux kernel for the past ten years, has reworked the kernel's DIO (Direct I/O) code so that it is vastly simpler while also proving faster in benchmark runs from the past week.
The original work-in-progress patch to improve the DIO code was posted on February 11. As Overstreet wrote then, "The end result is vastly simpler-- direct-io.c is now less than 700 lines of code, vs. the more than 1300 previously. dio_submit is almost gone. I'm now down to four things left in it. It relies heavily on my block layer patches for efficient bio splitting, and making generic_make_request() take arbitrary size bios."
"It also gets rid of the various differences between async and sync requests. Previously, for async reads it marked pages dirty before submitting the io (in process context), then on completion punts to workqueue to redirty the pages if any need to be. This now happens for sync reads as well," he added.
The rework not only cuts the line count of the Linux DIO code; with the most recent patch, it also yields measurable performance improvements.
On Wednesday, Overstreet reported-- "Got it working again and ran some benchmarks. On a high end SSD, doing 4k random reads with fio I got around a 30 percent increase in throughput. The decrease in compiled binary size is even more dramatic than the reduction in lines of code."
"It's only been lightly tested - I haven't run xfstests yet - but there shouldn't be anything broken excluding btrfs. There's a few more performance optimizations I may do, but aside from the btrfs issues, I think it's essentially done. Due to the sheer number of hairy corner cases in the dio code, I'd really like to get as much review as possible. The new code should be vastly easier to review and understand," he added.
This leaner yet higher-performing I/O code sounds exciting, but it hasn't yet been reviewed extensively by other Linux kernel developers. If everything pans out, the work could be merged into a Linux kernel release in the near term.
In other Linux kernel news
Aside from the much slower speed of writing on paper, another significant drawback of traditional pens compared with typing on a computer is the lack of correction utilities.
A paper notebook that you write on with a pen or pencil doesn't come with software that lets you know when you have spelled something wrong or when your grammar is off. It also won't tell you when your handwriting has become almost impossible for most humans to read.
But now, European startup firm Lernstift is looking to bring those correction utilities to pen-and-paper-- literally.
The company’s new pen of the same name will actually vibrate to alert you when your handwriting has become illegible or when you’ve made a grammatical error.
To be sure, the new pen doesn’t sport a fancy display, but instead uses different combinations of modes and vibrations in order to provide you with specific corrections.
When the pen is switched to Calligraphy Mode, it will vibrate once whenever it detects an illegible letter. In Orthography Mode, the pen will vibrate once when you make a spelling error, and vibrate twice when it detects a grammatical error.
The detection mechanisms work in the air as well, so you don’t have to actually put pen to paper for the correction features to function.
The Lernstift pen runs on the Linux operating system and employs the use of motion sensors in order to detect what you’re writing. It uses the data obtained from the motion sensors to look for possible errors.
Currently, two models of the pen are in production. The first model can be considered the standard version, providing the correction capabilities, but not much else.
A more complex model adds WiFi connectivity as well as a pressure sensor to let you know when you're pressing too hard on the paper.
If you think you’ll be emotionally stable after a pen tells you that it can write better than you, the standard model is slated for an August 2013 release, while the more complex model is planned for a release in 2014. Pricing isn't available yet.
In other Linux news
According to various rumors seen on the blogosphere earlier this morning, Microsoft looks like it could be taking a meaningful look at releasing a full Linux port of its Office Suite sometime next year.
The sudden change of mind is apparently due to Linux showing commercial viability, and because Microsoft is reportedly already working on Office for Android.
Android, as you are probably already aware, is a Linux-based operating system, meaning a lot of the porting work will already have been done.
It shouldn't take too much effort to take the next step and bring Office to Red Hat, CentOS, Ubuntu or any other Linux flavor for that matter.
Until today, Microsoft has never released a piece of desktop software for Linux, with the exception of Skype - and that was an acquisition, so it's a different story.
Microsoft does, however, have a Linux department, mainly tasked with maintaining Hyper-V (virtualization) compatibility with Linux guest operating systems; Windows Server 2008 and especially Windows Server 2012 are prime examples.
Presumably, and with the development of Office for Android, Microsoft has beefed up its number of Linux developers, and those developers will then also work on Office for Linux.
The big question is whether there's actually significant demand for Office on Linux. On any typical day, Linux holds perhaps 1 or 2 percent of the desktop market, and about 53 percent of that is Ubuntu.
However, it's important to remember that almost every Linux distribution ships with LibreOffice for free. It's only an educated guess, but some open-source developers suspect that scant few Linux users would proactively go out and pay for Microsoft Office.
After all, many desktop Linux users chose their operating system because it's free - both in the money sense, and free from Microsoft's tight grip on Windows and its desktop software.
The other possibility is that Microsoft could be reacting to increased uptake of Linux and cloud-based productivity suites by large institutions, such as universities, cities, municipalities and governments from all over the globe.
In other open source and Linux news
Ever since the web went mainstream in the mid-1990s, Apache has ruled the world when it comes to web servers, and it still does, but some say this could soon change. A lesser-known web server, Nginx (pronounced "Engine-X"), has quietly been taking market share away from Apache.
It now holds almost 12.7 percent of the global web server market, and serves almost 12.8 percent of the world's most heavily trafficked websites, according to new data from Netcraft.
It wasn't supposed to go that way. Once an open-source project gains traction, it tends to keep gaining market share, just like the Linux operating system did.
But Apache has been on the wane, losing 100 million hostnames since June 2012, and not because of any resurgence from Microsoft's IIS web server. Apache still claims close to 55.5 percent of all active websites, but Nginx is on the rise, make no mistake.
Created by Igor Sysoev and first released in 2004, Nginx now powers websites with serious scale requirements like 163.com, Wordpress.com, Yandex.ru and, last but not least, CNN.com.
Nginx and Lighttpd are probably the two best-known asynchronous servers, and Apache is undoubtedly the best-known process-based web server. The main advantage of the asynchronous approach is scalability: in a process-based server, each simultaneous connection requires its own thread or process, which incurs significant overhead.
An asynchronous server, on the other hand, is event-driven and handles many requests in a single thread. While a process-based server can often perform on par with an asynchronous one under light load, under heavier loads it consumes far more RAM, which significantly degrades performance.
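The difference can be sketched with a toy event-driven echo server: one thread, one selector, any number of connections. This is plain Python using the standard `selectors` module - a sketch of the model, not of Nginx's actual implementation.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)  # one callback per socket

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)        # echo the request back
    else:                         # client closed the connection
        sel.unregister(conn)
        conn.close()

def serve(host="127.0.0.1"):
    server = socket.socket()
    server.bind((host, 0))        # port 0: let the OS pick a free port
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    return server.getsockname()[1]

def loop_once(timeout=0.1):
    # One event-loop iteration: dispatch whichever sockets are ready.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

A thread-per-connection server pays a full thread stack (typically megabytes) for every idle client; the selector version pays only a file descriptor and a callback registration, which is where the scalability gap under heavy load comes from.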
Today, Nginx offers fewer features than Apache, but its performance is significantly higher. In that respect it's not unlike MySQL in the database market, or Tomcat in the application server market: the open-source alternative starts out feature-constrained but significantly better for a particular purpose.
Over time it adds functionality and keeps improving performance until, like Linux in the server and mobile operating system markets, it simply dominates the segment.
That used to be Apache's own story: it displaced all the proprietary players in the web server market. But now its market share is being eaten away, and unless it is fundamentally re-architected for better scalability, Apache may ultimately give way to Nginx or other open-source web servers.
The people at Apache already know this and haven't been sitting idle. Not at all, in fact. In early 2012 the Apache Software Foundation released version 2.4, of which ASF president Jim Jagielski declared-- "As far as true performance is concerned - real-world performance as seen by the end-user - Apache version 2.4 is as fast, and even faster than some of the servers who may be 'better' known as being 'fast', like Nginx."
Despite nearly a year in the market, Apache 2.4 hasn't stemmed its slide. What's so interesting and healthy here is that two open-source projects are fighting for market supremacy in the only way open source really knows how-- technical merit. Pure and simple.
It will be interesting to see what kind of progress Nginx makes in 2013 and if it continues to eat away at Apache's strong market share. We will also be on the lookout to see if other big-name websites adopt it in the near future.
In other Linux and open source news
James Bottomley has restructured the Linux Foundation's mini bootloader so that any Linux distribution can be launched on PCs with UEFI Secure Boot.
The bootloader's development has been sponsored by the Linux Foundation. The revised version uses a different method to boot the more complex secondary bootloader.
This enables it to co-operate with Gummiboot, which was introduced in mid-2012. Unlike GRUB, Gummiboot doesn't load and start Linux itself; instead it relies on EFI mechanisms, which keeps its structure significantly less complex than GRUB's.
But when Secure Boot is active, the approach requires other firmware-related mechanisms to verify the kernel before it is launched.
In a recent blog post, Bottomley says that as a consequence, Gummiboot doesn't work with Shim or the original version of the Linux Foundation's bootloader when Secure Boot is active. Further details can be found in the slides of a presentation given by Bottomley, a member of the Linux Foundation's Technical Advisory Board.
In the presentation, he explains that the Linux kernel and Gummiboot should not be verified via keys, and that user-authorised hash values should be used instead.
To provide this functionality, the new version uses modifications that are also part of an extension introduced by SUSE Linux developers and since integrated into Shim 0.2.
That extension allows Shim to store trusted code information in a MOK (Machine Owner Key) database.
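Conceptually, a hash entry in the MOK database is an allowlist keyed by the digest of one exact binary: the user authorises that digest once, and the firmware-side check only lets a binary with a matching digest boot. The Python sketch below illustrates the idea only - real Shim hashes the PE image according to Authenticode rules, and the function names here are invented for illustration.

```python
import hashlib

trusted_hashes = set()   # stands in for the MOK hash list

def enroll(image: bytes):
    """User authorises this exact binary (in real life, via MokManager)."""
    trusted_hashes.add(hashlib.sha256(image).hexdigest())

def verify(image: bytes) -> bool:
    """Boot proceeds only if the image's digest was previously enrolled."""
    return hashlib.sha256(image).hexdigest() in trusted_hashes
```

The appeal of hash-based enrolment over key-based signing is that no Microsoft (or vendor) signature is needed for the enrolled binary - the flip side being that every rebuilt binary has a new digest and must be authorised again.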
According to Bottomley's presentation slides, it takes a week or two for Microsoft to respond to bootloader submissions and provide a signature that is considered trustworthy by Secure Boot PCs.
This means that the difficulties Bottomley encountered when he tried to get an earlier version of his mini bootloader signed last autumn appear to have been eliminated.
Bottomley says that he submitted the revised version to be signed by Microsoft on January 21st, and that he hopes to receive a signed version shortly. The Linux Foundation plans to offer this signed version for download free of charge.
Shim contributor Matthew Garrett has recently also written a blog post on UEFI and Secure Boot. In that post, the developer provides some details about the issues that have caused Samsung notebooks to refuse to start at all after Linux was booted.
He also mentions a few flaws in the UEFI firmware of various Toshiba notebooks that result in the signatures of the Secure Boot-compatible Fedora 18 being considered invalid, which prevents the distribution from starting when Secure Boot is active.
Source: Google
