A lot of websites use some kind of paging mechanism to limit the number of results shown, either to save bandwidth by serving the results in pages, or because users should make their search criteria more specific when there is no paging and only a maximum number of records is returned.
The datagrid supports paging, but this paging mechanism first fetches all records from the database, then stores the datatable in the viewstate, and only shows a subset of that data to the user.
Depending on the type of data, the database will perform either a table scan or an index scan. If you have a filter/WHERE clause, then hopefully there is a matching index that is also ordered as expected.
- A lot of network traffic between the web server and the database
- Large web pages because of the viewstate
A workaround to limit the viewstate is to reload the dataset on each page hit and disable viewstate on the datagrid, for example as sketched below.
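A minimal sketch of that workaround, assuming a DataGrid called logGrid and a LoadDataTable() helper that queries the database:

protected void Page_Load(object sender, EventArgs e)
{
    // Keep the data out of the viewstate so the rendered page stays small.
    logGrid.EnableViewState = false;
    // Because nothing is stored in the viewstate, the data must be reloaded
    // and rebound on every request, including postbacks.
    logGrid.DataSource = LoadDataTable();
    logGrid.DataBind();
}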
Instead of letting the database return all records, we ask it to return only a subset of records. Most applications work with a current page and a page size, so when your page size is 10 and the current page is 2, rows 20-29 should be returned. Some databases have native support for paging, like MySQL with its LIMIT keyword, but paging really is hell with older Microsoft SQL Server versions. With SQL 2005 you can easily hack paging into a query by using the ROW_NUMBER() function, and AFAIK SQL 2008 does not add functionality to make paging just as easy as MySQL's LIMIT clause.
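A minimal sketch of that ROW_NUMBER() hack from ADO.NET; the articles table, its columns, and the connectionString, currentPage and pageSize variables are illustrative assumptions:

// requires System.Data.SqlClient
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    @"SELECT id, title, status
      FROM (SELECT id, title, status,
                   ROW_NUMBER() OVER (ORDER BY id) AS rownum
            FROM articles) AS numbered
      WHERE rownum BETWEEN @first AND @last
      ORDER BY rownum", connection))
{
    // Page 2 with page size 10 maps to rows 21-30 (ROW_NUMBER() is 1-based).
    command.Parameters.AddWithValue("@first", currentPage * pageSize + 1);
    command.Parameters.AddWithValue("@last", (currentPage + 1) * pageSize);
    connection.Open();
    // ... execute a reader and map the rows
}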
So what you actually want is not to be bothered with such code, and the easiest way to achieve that is by making use of Linq to sql, the Entity Framework, NHibernate, or any other database abstraction layer.
NHibernate, for example, has the SetMaxResults and SetFirstResult methods, which make it very easy to fetch a certain 'page'.
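A minimal sketch, assuming an open NHibernate session, a mapped Person entity and currentPage/pageSize variables:

var page = session.CreateCriteria(typeof(Person))
    .AddOrder(Order.Asc("Name"))            // a stable order keeps the pages deterministic
    .SetFirstResult(currentPage * pageSize) // skip the rows of the previous pages
    .SetMaxResults(pageSize)                // fetch exactly one page
    .List<Person>();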
Linq to sql
Linq to sql has the similar methods Skip and Take, which work in the same way as NHibernate's methods. For example (the Person table, dataContext and the paging variables are illustrative):
IQueryable<Person> source = dataContext.Persons;
var page = source.OrderBy(p => p.Id).Skip(currentPage * pageSize).Take(pageSize).ToList();
- Quite fast if good indexes are available in the database.
- Only the needed data is transferred from the database to the web server
- Paging properties must be passed from front-end to back-end
- Cannot use most ASP.NET controls that support paging
But even this can have performance issues. Imagine a table that contains billions of records: a query that performs an index scan will be slow too! So you have to be sure that your query will use one or more indexes without performing a full index scan; a LIKE '%x%' statement, for example, forces one. Depending on the database you use, this can even result in a table scan.
When, for example, you are paging through a log table, it is very likely that log rows are being added while you are browsing the log. So when you are watching page 0, wait a couple of seconds, and then view the next page, you could be shown data that is even newer than what was on page 0, because rows have been added in the meantime. This is because the results are not cached and are ordered descending on creation date (or identity).
What you would like is that when you view page 0, the next page (1) contains older data (in this log example). Here, paging as described previously will not work.
The first (0) page query would be something like this (assuming a log table, newest rows first):
SELECT TOP 20 id, title, status FROM log ORDER BY id DESC
The query for the second page (1) then needs the id value of the LAST row of the previous query result (passed in as @lastId):
SELECT TOP 20 id, title, status FROM log WHERE id < @lastId ORDER BY id DESC
The solution here is not making use of a "page" but only a "next" set, as sketched below. This is exactly what you need in most situations, and if you are not sure about this, I can refer to an excellent application that (I think) makes use of this mechanism: Google Reader. You scroll down, data is fetched on the fly, and all previously downloaded items are kept.
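A minimal sketch of such a "next set" fetch in Linq to sql terms; the LogEntry entity, its Id column and dataContext are illustrative:

public IList<LogEntry> GetNextLogEntries(int? lastSeenId, int setSize)
{
    IQueryable<LogEntry> query = dataContext.LogEntries;
    if (lastSeenId.HasValue)
        query = query.Where(e => e.Id < lastSeenId.Value); // continue right after the last row shown
    // Newest rows first; the caller remembers the Id of the last row it received.
    return query.OrderByDescending(e => e.Id).Take(setSize).ToList();
}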
- Easy to implement
- Page result is what you would expect
- Sorting differently always puts you back at the first 'page'
- Requires unique indexes / combined columns
Supporting non-unique indexes / ORDER BY clauses
So what if your table does not have a unique index for the WHERE/ORDER BY clause that you specified? Imagine a result where the filter column is a "name", the page size is 10, but 16 people share the same name. This would result in a page showing only the first 10 people and the next page showing people with another name: the viewer would miss 6 people! For this reason we want to see all ties in the result.
It could be that you do not want to extend the ORDER BY to make it unique, because that could result in a table scan when the ORDER BY needs data that is not available in the index.
This can be solved by making use of the SQL keyword WITH TIES, which makes sure that the result will always contain all rows matching the WHERE clause in combination with the ORDER BY clause (the query below assumes a persons table matching the example above):
SELECT TOP 10 WITH TIES name, dob, county, city, accountnr FROM persons ORDER BY name
-- WITH TIES requires an ORDER BY and also returns every row that ties with the last one
This query will result in a page containing at least 16 people in the situation described above. The page will be larger than 10, but this is not a problem in an application that fetches data like Google Reader does.
- Very fast and most efficient usage of indexes.
- It is not standard SQL syntax. Most vendors support similar behaviour (see http://troels.arvin.dk/db/rdbms/).
- Not supported natively by frameworks like NHibernate, Linq to sql, EF
Jeremy has a nice post called "What's your worst day as a coder?" and I wanted to post my own list:
- When I need to fix a bug in a module and open a 7500+ LOC (non-generated) class file (that really happened somewhere in the past).
- When I interview a candidate and find out he cannot reverse an array.
- When I'm using a designer for more than 10% of my time.
- When I'm forced to use SourceSafe.
- When I broke the build.
- When I don't code a single line.
Just a brainflush. Will probably update the post in the near future :)
Just fixed a very nasty bug in one of the applications I'm developing. In the database world there is a name for this kind of bug: a "lost update".
Somewhere in a class I did the following based on a passed username:
var profile = ProfileBase.Create(username);
profile.SetPropertyValue("test", "some value"); // update the "test" profile property
profile.Save();                                 // write the profile back to the provider
This code was (by accident) fired for the same user that requested the page. ASP.NET's default behavior for profiles is to load the current user's profile at the start of the page and save it at the end. And here lies my problem!
The code above first writes the updated "test" value, but at the end of the request the page overwrites it with the old version, so it looks like nothing happened.
So the lesson learned today is to first check whether the current user is the same user I want to modify the profile for, and if so, use the profile instance from HttpContext.Profile (see the sketch below). Another lesson learned is that ASP.NET profiles don't use optimistic concurrency control to prevent such write patterns.
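A minimal sketch of that check; GetProfile is a hypothetical helper:

ProfileBase GetProfile(string username)
{
    HttpContext context = HttpContext.Current;
    bool isCurrentUser = context.User.Identity.IsAuthenticated &&
        string.Equals(context.User.Identity.Name, username, StringComparison.OrdinalIgnoreCase);
    // The page's own profile instance is loaded at request start and saved at
    // the end of the request, so writes through a second instance for the same
    // user would be overwritten.
    return isCurrentUser ? context.Profile : ProfileBase.Create(username);
}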
The last couple of days I've tried to see if it is possible to make use of standard HTTP compression on WCF services, as I have had very good experiences with WSE and even asmx webservices that used HTTP compression to save bandwidth and, more importantly, to get better response times for large XML messages.
Tracing request/response with Fiddler
I created a small test project containing a contract plus a client and a service based on that contract, and launched Fiddler. I discovered that WCF does NOT add the Accept-Encoding header with gzip and/or deflate values. So I fired the request again from Fiddler, this time with a manually entered Accept-Encoding header, but the result was still not compressed, so I first needed to enable HTTP compression in IIS.
Enable IIS http compression for WCF services
To enable http compression in IIS (6) you need to do the following:
- Launch the IIS mmc snapin
- Open the properties of the Web Sites node
- Go to the Service tab
- Enable "Compress application files"
- Select OK
This enables HTTP compression for dynamic content, but now we need to let IIS know which dynamic content to compress. I will do this with the command-line adsutil.vbs script. Here I add HTTP compression for .svc files and also change the default compression level from 0 to 9.
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/GZIP/HcScriptFileExtensions "asp" "dll" "exe" "svc"
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/DEFLATE/HcScriptFileExtensions "asp" "dll" "exe" "svc"
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/GZIP/HcDynamicCompressionLevel 9
CSCRIPT.EXE ADSUTIL.VBS SET W3Svc/Filters/Compression/DEFLATE/HcDynamicCompressionLevel 9
After changing these settings you will need to restart IIS by launching iisreset.
Add Accept-Encoding http header to request
Here we have two possibilities. The first is the easiest: just add the property to the operation context as shown below.
using (new OperationContextScope(client.InnerChannel))
{
    var properties = new HttpRequestMessageProperty();
    properties.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    // Attach the HTTP properties to the outgoing message of this scope.
    OperationContext.Current.OutgoingMessageProperties.Add(HttpRequestMessageProperty.Name, properties);
    // ... call the service operation here
}
The second option is creating a client message inspector by inheriting from IClientMessageInspector and configuring WCF to use the message inspector for a certain end-point. The message inspector would do exactly the same as the code above, but within the WCF pipeline. I will leave wiring it up as a nice exercise ;-)
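For reference, a minimal sketch of such an inspector (names are illustrative, and it still has to be attached to the endpoint through an IEndpointBehavior):

public class AcceptEncodingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        object existing;
        HttpRequestMessageProperty http;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out existing))
            http = (HttpRequestMessageProperty)existing;
        else
        {
            http = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, http);
        }
        // Ask the server (IIS) for a compressed response.
        http.Headers[HttpRequestHeader.AcceptEncoding] = "gzip,deflate";
        return null; // no correlation state needed
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        // Nothing to do here; decompressing the reply is the hard part described below.
    }
}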
I launched my client and saw that the Accept-Encoding header was set in the request and that the result now got compressed by IIS, but WCF failed to decompress the result automatically. So I started searching on the WCF client proxy and other WCF pipeline related settings for something like the EnableDecompression setting that web references have, but could not find a way to let WCF decompress the gzip/deflate response.
The WCF technology samples contain a compression example and I modified it a bit. It uses gzip, but that did not seem to work in Fiddler, so I used deflate in the example code and that worked; I also changed the code to NOT alter the content-type. Here I learned that you cannot change the transport properties in a MessageEncoder. But this is only needed when you want to compress your request, or compress your response when you do not want IIS to do it for you (because you don't use IIS).
So I used a message inspector implementing IClientMessageInspector and IDispatchMessageInspector to automatically set the Accept-Encoding and Content-Encoding HTTP headers. This worked perfectly, but I could not manage to decompress the response on the server by first detecting the Content-Encoding header, so I used the workaround of first trying to decompress the request and, if that fails, processing it as normal. I did the same in the client pipeline and that also works.
I then tried to see the behavior of Silverlight, as it delegates the HTTP request to the browser to do the actual call, and there I noticed that HTTP compression works perfectly with Silverlight on a service reference. The browser apparently decompresses the response before Silverlight gets the data stream, which happens to be perfect for the scenarios in which we want to make use of HTTP compression.
I have now ditched this approach, as we also need to support clients that do not set the Accept-Encoding header. That means I need the ability to read the HTTP header of the incoming request and set a "context" value that decides whether or not to compress the response, and I have not found out yet how to do that safely for multiple concurrent requests. I really recommend using IIS HTTP compression and not trying to do this with the WCF hacking described here! If you want to compress your request, then this is probably the only way to achieve it in the current WCF version.
If you want .NET clients to decompress compressed responses, then just use a web reference instead of a service reference and make use of the EnableDecompression setting, except when you use Silverlight as the client.
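A minimal sketch, assuming a generated web reference proxy called MyService:

var proxy = new MyService();
proxy.EnableDecompression = true; // HttpWebClientProtocol transparently decompresses gzip/deflate responses
// ... call the webservice as usual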