ASP.Net 4.0 New Features

ViewStateMode – ViewState for Individual Controls
ASP.Net 4.0 gives finer control over view state, from the page down to its individual child controls. That is, view state of a control can be enabled or disabled irrespective of its parent control’s view state. Even if view state is disabled at the page level, each control on the page can have its own view state enabled, disabled, or inherited from the page’s ViewStateMode property.
Used properly, this property can certainly boost the performance of a page.

For example, we can individually enable or disable a user control’s view state within a page.

By default, ViewStateMode is Enabled for the Page object, while controls default to Inherit. Note that ViewStateMode is honored only when EnableViewState is true.
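
For example, a minimal sketch (the control IDs are illustrative): view state is switched off for the whole page, but a single control opts back in:

<%@ Page Language="C#" ViewStateMode="Disabled" %>
...
<asp:Label ID="LblInherited" runat="server" Text="Inherits Disabled from the page" />
<asp:Label ID="LblWithState" runat="server" ViewStateMode="Enabled" Text="Keeps its own view state" />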

Page.MetaKeywords and Page.MetaDescription – SEO Optimization Feature
ASP.Net 4.0 introduces these two properties to help developers add the keywords and description meta tags to aspx pages in an easier fashion. Web search engines use these two meta tags when indexing pages. The properties can be set in several ways: inside the <head> tag, in code behind, or even at the <%@Page%> directive level.

However, setting meta keywords or description in code behind is more useful when we have to add them dynamically from a source like a database.

MetaKeywords is used to store a few keywords that briefly highlight the important content of a page. From an SEO perspective, the keywords should be separated by commas.

MetaDescription is used to add a short page description that helps search engines summarize the page in their result listings.

Prior to ASP.Net 4.0, we had to add meta tags using the HtmlMeta control (public class HtmlMeta : HtmlControl), adding them to the page header as:

protected void Page_Load(object sender, EventArgs e)
{
    // Build the keywords meta tag
    HtmlMeta metakey = new HtmlMeta();
    metakey.Name = "keywords";
    metakey.Content = "ASP.Net 2.0 3.5";

    // Build the description meta tag
    HtmlMeta metadesc = new HtmlMeta();
    metadesc.Name = "description";
    metadesc.Content = "ASP.Net 2.0 3.5 Page Description...";

    // Add both to the page header
    Page.Header.Controls.Add(metakey);
    Page.Header.Controls.Add(metadesc);
}


In ASP.Net 4.0, we can set them in any of the following ways.

protected void Page_Load(object sender, EventArgs e)
{
    //Adding Page meta tags information
    this.Page.MetaKeywords = "ASP.Net 4.0 SEO Meta Tag";
    this.Page.MetaDescription = "Serializing and Deserializing Thoughts..";
}

Or,

<head runat="server">
    <title>Feature: ViewStateMode</title>
    <meta name="keywords" content="ASP.Net 4.0 ViewStateMode" />
    <meta name="description" content="ViewStateMode feature in ASP.Net 4.0" />
</head>

Or inside the Page directive:

<%@ Page Language="C#" MetaKeywords="ASP.Net 4.0 ViewStateMode" MetaDescription="ViewStateMode feature in ASP.Net 4.0" %>

Response.RedirectPermanent – Search Engine Friendly Webpage Redirection

In classic ASP, and in ASP.Net versions earlier than 4.0, we redirected to new pages or links by setting the response status to 301 and then adding a Location header. ASP.Net 4.0 provides the Response.RedirectPermanent method, which redirects with a status code of 301 set implicitly. Search engines use this 301 code to recognize a permanent redirection from old page links.

For example,

Classic ASP method:

<%@ Language=VBScript %>
<%
Response.Status="301 Moved Permanently"
Response.AddHeader "Location","http://www.new-page-url.com/"
%>


ASP.Net method prior to 4.0:

<script runat="server">
private void Page_Load(object sender, System.EventArgs e)
{
    Response.Status = "301 Moved Permanently";
    Response.AddHeader("Location", "http://www.new-page-url.com");
}
</script>


ASP.Net 4.0 method:

Response.RedirectPermanent("http://www.new-page-url.com");
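
RedirectPermanent also has an overload that takes an endResponse flag; a small sketch (the URL is a placeholder):

protected void Page_Load(object sender, EventArgs e)
{
    // Pass false to let the rest of the page lifecycle run instead of
    // ending the response immediately
    Response.RedirectPermanent("http://www.new-page-url.com", false);
}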


Web.Config Refactoring – Custom HttpHandlers and HttpModules

Web.config now looks cleaner, as most of the settings are inherited from the machine.config file, and ASP.Net 4.0 is built to benefit from IIS 7 and IIS 7.5 features. When IIS is set to use .Net 4.0 in Integrated Pipeline mode, the <compilation> element holds the .Net version in its targetFramework attribute, and the traditional <httpHandlers> and <httpModules> sections move out of <system.web> into the new <system.webServer> section. All custom handlers are added inside <handlers>, and all custom modules inside <modules>.

<system.webServer>
    <!-- Add the module for Integrated mode applications -->
    <modules runAllManagedModulesForAllRequests="true">
        <add name="MyModule" type="WebAppModule.MyCustomModule, WebAppModule" />
    </modules>
    <!-- Add the handler for Integrated mode applications -->
    <handlers>
        <add name="MyHandler" path="svrtime.tm" verb="GET"
             type="WebAppModule.MyCustomHandler, WebAppModule"
             preCondition="integratedMode" />
    </handlers>
</system.webServer>


Also,

<system.web>
    <compilation debug="true" targetFramework="4.0" />
</system.web>


An interesting point: when we add custom handlers and modules this way, we do not have to configure them manually in IIS again; IIS picks up the web.config changes automatically.
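
For completeness, here is a minimal sketch of what such a handler and module might look like; the class names match the config above, but the bodies are purely illustrative:

using System;
using System.Web;

namespace WebAppModule
{
    public class MyCustomHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // Handles GET requests for svrtime.tm (see path/verb in the config above)
            context.Response.ContentType = "text/plain";
            context.Response.Write(DateTime.Now.ToString());
        }
    }

    public class MyCustomModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            // Hook a pipeline event; the actual logic is up to the application
            application.BeginRequest += delegate { /* custom logic here */ };
        }

        public void Dispose() { }
    }
}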


Exception Handling in WCF

We have always done exception handling in managed applications using try-catch blocks with Exception or derived custom exception objects. But this mechanism is very much .Net specific. When we develop SOA applications, our application is not limited to one technology or a single loyal client, so communicating service or service-method level errors to clients over the wire becomes a little tricky. WCF has two error handling mechanisms: one is the usual Exception object, and the other is the SOAP fault message. A SOAP fault is used to marshal .Net exceptions to the client in a readable, convenient way that supports interoperability. With a SOAP fault, the verbose exception message is reduced to a code and a message. For this, the System.ServiceModel namespace provides the FaultException class and the FaultContract attribute.

Let’s see by example how to do exception handling in a WCF application. First, we write our service.

namespace WcfSvc
{
    [ServiceContract]
    public interface IBasicMathService
    {
        [OperationContract]
        int Subtraction(int x, int y);

        [OperationContract]
        int Multiplication(int x, int y);

        [OperationContract]
        [FaultContract(typeof(BasicMathFault))]
        int Addition(int x, int y);
    }

    [DataContract]
    public class BasicMathFault
    {
        [DataMember]
        public string Source;

        [DataMember]
        public string ExceptionMessage;

        [DataMember]
        public string InnerException;

        [DataMember]
        public string StackTrace;
    }
}

And its implementation:

public class BasicMath : IBasicMathService
{
    public int Addition(int x, int y)
    {
        int result = 0;
        try
        {
            result = checked(x + y);    // checked, so that overflow raises an exception
        }
        catch
        {
            BasicMathFault ex = new BasicMathFault();
            ex.Source = "BasicMath.Addition method";
            ex.ExceptionMessage = "Could not perform addition operation.";
            ex.InnerException = "Inner exception from math service";
            ex.StackTrace = "";
            //Throwing strongly-typed FaultException
            throw new FaultException<BasicMathFault>(ex,
                new FaultReason("This is an error condition in BasicMath.Addition method"));
        }
        return result;
    }

    public int Multiplication(int x, int y)
    {
        //Due to some calculation error condition, let’s assume we are throwing this error.
        //Throwing a plain FaultException
        throw new FaultException(
            new FaultReason("Error occurred while processing for the result"),
            new FaultCode("multiplication.method.error"));
    }

    public int Subtraction(int x, int y)
    {
        //The exception we generally throw in a managed application
        throw new NotImplementedException("Method still not implemented");
    }
}

This is our typical service code. Looking at the IBasicMathService interface and its implementation in the BasicMath class, we have:

the Addition(x,y) method decorated with the FaultContract attribute in the IBasicMathService interface,

the Subtraction(x,y) method using the simple exception-throwing mechanism,

the Multiplication(x,y) method using a plain FaultException object, and

the Addition(x,y) method using a strongly-typed fault of type BasicMathFault in a FaultException<T> object.

So what does all this mean to the client, and how is the exception transmitted to it? Let’s answer with these three examples in our client code.

A) Throwing a simple Exception

private void SubtractIntegers()
{
    try
    {
        obj = new BasicmathServiceRef.BasicMathServiceClient();
        int result = obj.Subtraction(10, 15);
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message + "<br/>");
    }
}

When this method is called, the client receives a verbose error message from WCF:

“The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs.”

If we look closely at this error message, we see two ways forward: set IncludeExceptionDetailInFaults through the ServiceBehaviorAttribute on the class containing this method, or set the same value in the configuration file of the service.

Either of these two things is pretty easy.

a) Decorate the BasicMath class as:

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class BasicMath : IBasicMathService
{

b) Or, modify the value in the service’s config file.
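
A typical entry, placed inside the <system.serviceModel><behaviors> section (the behavior name is illustrative), looks like:

<serviceBehaviors>
    <behavior name="MathServiceBehavior">
        <serviceDebug includeExceptionDetailInFaults="true" />
    </behavior>
</serviceBehaviors>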

By default this value is false. If we make it true, the verbose error message we received is reduced to the human-readable message we passed to the constructor of NotImplementedException:

“Method still not implemented”

While debugging a WCF exception, one may see an error in the service like “xyz exception unhandled by user code”. This is somewhat misleading, but there is no need to worry.

B) Throwing an exception of FaultException type

private void MultiplyIntegers()
{
    try
    {
        obj = new BasicmathServiceRef.BasicMathServiceClient();
        int result = obj.Multiplication(10, 15);
    }
    catch (FaultException ex)
    {
        Response.Write(ex.Message + "<br/>");
    }
}

On calling this method, WCF serializes the exception as a fault message and returns it to the client as:

“Error occurred while processing for the result”

Note that the client does not receive the verbose error message when we throw an exception of type FaultException, regardless of whether IncludeExceptionDetailInFaults is true or false. Looking at the throwing code,

throw new FaultException(new FaultReason("Error occurred while processing for the result"), new FaultCode("multiplication.method.error"));

we have used a FaultCode. The client can use this specific fault code contained in the FaultException to make decisions, but the approach quickly becomes procedural, with many if-else conditions branching the code, something like:

if (ex.Code.Name == "multiplication.method.error")
{
    Response.Write(ex.Message + "<br/>");
}

C) Throwing a strongly-typed fault

private void AddIntegers()
{
    try
    {
        obj = new BasicmathServiceRef.BasicMathServiceClient();
        int result = obj.Addition(10, 15);
    }
    catch (FaultException<BasicmathServiceRef.BasicMathFault> ex)
    {
        // ex.Detail exposes the BasicMathFault members (Source, ExceptionMessage, ...)
        Response.Write(ex.Message + "<br/>");
    }
}

With this approach, the client can explicitly handle faults of exactly the type exposed by the service method it uses; here that type is BasicMathFault. At the service level, the method has to be decorated with the FaultContract attribute so that the fault can be serialized:

[OperationContract]
[FaultContract(typeof(BasicMathFault))]
int Addition(int x, int y);

How much detail the fault type carries is up to us; WCF serializes only the information we choose to expose to the client.

When we call this Addition(x,y) method and an exception occurs, the client receives:

“This is an error condition in BasicMath.Addition method”

Thus, we see how we can do exception handling in WCF.

Designing Business Logic Layer: Some Guidelines

The business logic layer is a crucial layer in any database application. Thought applied to this layer from the beginning of the design can save a lot of time and complexity later. Software architects divide the software into modules and then into different layers, with core layers for the important application features. But when actual development starts, complexity gradually starts crawling into the code of the different layers and modules.
The reasons:
• We casually mix business rules of different modules
• We write methods that do too much
• We do not clearly separate the responsibilities of the presentation and data access layers
• We duplicate code, i.e. write the same code or methods in various places

Results are:
• Difficult to debug
• Difficult to understand the flow
• Difficult to maintain and modify business rules correctly when such rules exist across layers and modules
• Difficult to write Unit Tests

We can avoid these problems if we keep the following in mind when writing code.
• Write methods that do a single meaningful task per call, and do not mix other logic into them. For example, a SavePayment() method should only save a payment, not update, delete, check connection status, or read XML files. This is the Single Responsibility Principle.
• Encourage use of factory methods for object creation instead of writing lots of If-else constructs based upon some input type values.
• When you need data or result sets (DTOs) of other modules, preferably call that module’s business logic layer methods instead of copying its logic into yours. This is quite an important aspect of any business logic layer.
• Classes in this layer should be loosely coupled. Patterns like dependency injection and inversion of control can help here. Sometimes even a simple Enum type can come to a great rescue.
• Write business methods that accept a valid entity class or DTO object rather than single-valued parameters like integers, strings, arrays, or optional params. This way, the business logic layer methods keep working unmodified when the underlying database tables and entity or DTO classes change.
• Avoid lots of business rules in stored procedures or even in presentation UI.
• Business logic layer methods should not be aware of presentation UI controls’ properties or values. These methods should accept plain values such as integers or strings instead.

Let me explain all these points with one example. I worked on an accounting module of a project where customers could pay their bills in various ways: full payment, partial payment, or installments. Each payment mode had different rules and validations, so the module needed a clear separation of implementation, with the rules of each mode functioning without depending upon the others. This saved us a lot of coding and debugging time.
Let’s see the code snippets.

Enum showing different Payment Mode

public enum PaymentMode
{
    Normal,
    Part,
    Installment
}

Customer Bill DTO

public class CustomerBillDTO
{
    private Int64 intBillNo;
    private Int16 intBillMonth;
    private Int16 intBillYear;
    private double dblBillAmount;
    private string strCustomerID;
    private Int64 intPayAmount;
    //And many other fields…
}

Payment Processor factory class

interface IPaymentProcessorFactory
{
    IPaymentProcessor GetPaymentProcessor(PaymentMode mode);
}

public class PaymentProcessorFactory : IPaymentProcessorFactory
{
    private IPaymentProcessor objPaymentProcessor = null;

    public IPaymentProcessor GetPaymentProcessor(PaymentMode mode)
    {
        // Map each payment mode to its processor implementation
        switch (mode)
        {
            case PaymentMode.Normal:
                objPaymentProcessor = new NormalPaymentProcessor();
                break;
            case PaymentMode.Part:
                objPaymentProcessor = new PartPaymentProcessor();
                break;
            case PaymentMode.Installment:
                objPaymentProcessor = new InstallmentPaymentProcessor();
                break;
        }
        return objPaymentProcessor;
    }
}

The different payment processor classes

public interface IPaymentProcessor
{
    bool SavePayment(CustomerBillDTO Bill);
}

public class NormalPaymentProcessor : IPaymentProcessor
{
    public bool SavePayment(CustomerBillDTO Bill)
    {
        // Normal-payment rules and validations would go here
        return true;
    }
}

public class PartPaymentProcessor : IPaymentProcessor
{
    public bool SavePayment(CustomerBillDTO Bill)
    {
        // Part-payment rules and validations would go here
        return true;
    }
}

public class InstallmentPaymentProcessor : IPaymentProcessor
{
    public bool SavePayment(CustomerBillDTO Bill)
    {
        // Installment rules and validations would go here
        return true;
    }
}

Main class that processes each payment

class PaymentProcess
{
    private IPaymentProcessorFactory objProcessor = null;

    public PaymentProcess(IPaymentProcessorFactory Processor)
    {
        // The factory is injected, keeping this class loosely coupled
        this.objProcessor = Processor;
    }

    public bool ProcessPayment(CustomerBillDTO Bill, PaymentMode mode)
    {
        IPaymentProcessor objPaymentProcessor = this.objProcessor.GetPaymentProcessor(mode);
        return objPaymentProcessor.SavePayment(Bill);
    }
}

At the calling end, we simply make a call such as:

private void BtnSave_Click(object sender, EventArgs e)
{
    PaymentProcessorFactory objFactory = new PaymentProcessorFactory();
    PaymentProcess objProcess = new PaymentProcess(objFactory);
    // objCustomerBillDTO is a CustomerBillDTO populated from the UI
    bool result = objProcess.ProcessPayment(objCustomerBillDTO, PaymentMode.Normal);
}

This is how we cleanly separate each logical variant of the SavePayment() operation.
If the Part or Installment payment mode is discontinued in the future, we do not have to modify the code logic or add If-else constructs to branch around or skip code paths. If a new payment mode is added, writing a new XModePaymentProcessor class, adding one more Enum value, and adding one more instantiation case in the factory class is enough, as the sketch below shows.
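
For instance, a hypothetical sketch of adding an “Online” mode (all names here are illustrative, not part of the original module):

// One new enum value...
public enum PaymentMode { Normal, Part, Installment, Online }

// ...one new processor class...
public class OnlinePaymentProcessor : IPaymentProcessor
{
    public bool SavePayment(CustomerBillDTO Bill)
    {
        // Online-payment-specific rules and validations go here
        return true;
    }
}

// ...and one new case in PaymentProcessorFactory.GetPaymentProcessor:
case PaymentMode.Online:
    objPaymentProcessor = new OnlinePaymentProcessor();
    break;

Nothing else in the calling code or in PaymentProcess changes.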

Adding or removing Bill or Customer related fields in CustomerBillDTO does not even pose a threat to this business logic layer.

Finally, one should always keep in mind that classes and class methods are written for others to use, so be very clear about what the class should offer, and how.

Thanks.

Some Hidden Facts: Stored Procedure and Its Optimization

December 26, 2009

We create stored procedures in database applications for several benefits: better performance, security, easier code maintenance, and so on. But over time we may find these stored procedures not performing as well as expected. There can be many reasons: dependent objects (tables, indexes, execution plans, data size) have changed, or the stored procedures are being executed improperly. So we have to be a little careful all the way from creating stored procedures to executing them.

I would like to summarize a few points about these issues.

A. Stored Procedure Recompilation
When creating a stored procedure, we can specify the WITH RECOMPILE option. Such a stored procedure never benefits from a cached execution plan: each time it is executed, the cached plan is invalidated and a new plan is created based upon the parameters passed to it, if any. I do not see any big benefit in this option in general, but one may find it useful when the stored procedure returns results from, or executes, only a selective part of its body based upon the supplied input parameters, for example statements within an IF block or CASE expression.
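
For reference, a minimal sketch of the procedure-level option, reusing the sample table from the example that follows:

CREATE PROC dbo.uspExampleRecompile
    @x_input AS INT
WITH RECOMPILE    -- the whole procedure is recompiled on every execution
AS
SELECT x, y, z
FROM dbo.tblXYZ
WHERE xColumn >= @x_input;
GO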

I still feel one should prefer statement-level recompilation (available since SQL Server 2005), which recompiles individual queries rather than the whole stored procedure. This option is quite useful because the recompilation depends on the input data. Say you execute the stored procedure supplying an input parameter like ‘FirstName’, ‘LastName’, or ‘DateOfBirth’ at a time; then statement-level recompilation is the better choice. To use it, add the RECOMPILE query hint to that SQL statement within the stored procedure.

For example,
CREATE PROC dbo.uspExample
    @x_input AS INT,
    @y_input AS INT
AS
IF @x_input = 1
BEGIN
    SELECT x, y, z
    FROM dbo.tblXYZ
    WHERE xColumn >= @x_input
    OPTION (RECOMPILE);    -- the statement-level query hint
END
IF @y_input = 2
BEGIN
    SELECT x, y, z
    FROM dbo.tblXYZ
    WHERE yColumn >= @y_input
END
GO

Another tool we can use is the sp_recompile system stored procedure, which forces a user-defined stored procedure to recompile the next time it runs.

Let’s look at its syntax first.

EXEC sp_recompile 'dependent_object';

Here, ‘dependent_object’ can be a table, a view, a stored procedure, or even a trigger. If it is a table name, SQL Server will mark for recompilation all the stored procedures that reference that table. If it is a stored procedure name, that procedure will recompile the next time it runs.
This option is good when a table’s properties have changed and the table is used by many stored procedures. Instead of recompiling each depending stored procedure individually, a simple sp_recompile call is enough, with no server restart!
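
For example, to flag every stored procedure that references the sample table used earlier:

EXEC sp_recompile N'dbo.tblXYZ';
-- every procedure and trigger referencing dbo.tblXYZ is recompiled
-- the next time it runs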

There are also many scenarios where stored procedure recompilation happens automatically. If the server is running out of memory, the plan cache gets flushed. And if the stored procedure changes session-specific SET options (like SET LOCK_TIMEOUT, SET DATEFIRST, SET ANSI_WARNINGS, etc), it can also be recompiled.

Mixing DDL and DML statements inside a stored procedure may also cause recompilation. For example, many of us create temporary tables inside a stored procedure and then run DML operations based upon the values in these temporary tables. Such a stored procedure is forced to recompile so that a new plan accounts for the new temporary tables. For this type of stored procedure, one should opt for statement-level recompilation by adding the RECOMPILE hint to the DML statement that follows the DDL statements.

But what is the alternative to temporary tables here? We can use table variables instead, as sketched below. Or, if we cannot do without temporary tables, we should put the DDL statements at the very beginning of the stored procedure body so that multiple recompilations do not happen within a single call.
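
A minimal sketch of the table-variable alternative (the schema is illustrative):

DECLARE @Results TABLE (x INT, y INT, z INT);

INSERT INTO @Results (x, y, z)
SELECT x, y, z
FROM dbo.tblXYZ;
-- table variables do not trigger the recompilations that a #temp
-- table created mid-procedure can cause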

So stored procedure recompilation can be both harmful and useful, depending upon the situation. One has to think twice about whether recompilation is required.

B. Stored Procedure Name
We should always create stored procedures with a full naming convention: the schema name should always prefix the stored procedure name. The schema prefix helps SQL Server resolve the name quickly when the procedure is called, since the server knows immediately in which schema to look.
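
For example, prefer the schema-qualified form when calling the procedure defined earlier:

EXEC dbo.uspExample @x_input = 1, @y_input = 2;   -- resolves directly
-- rather than: EXEC uspExample ...               -- forces a schema search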

C. Table Indexes
Tables should have proper indexes, and these should be rebuilt or reorganized from time to time, as indexes become fragmented after heavy data insertion and deletion.

These are some useful, lesser-known facts about stored procedures.

Calling Method in Parent Page from User Control

October 10, 2009

In ASP.Net, we develop a custom user control as a reusable server control, independent of any containing parent aspx page. A user control has its own public properties, methods, delegates, etc that the parent aspx page can use once the control is embedded or loaded into the page. But sometimes the reverse is needed: the user control has to call a method in the page itself. Since a user control is developed with no knowledge of its containing page, calling a page method becomes a trick.

In .Net, the Delegate class has a DynamicInvoke method, which invokes (late-bound) the method referenced by the delegate. We can use it to call a method in the parent page from a user control. Let’s try this example.

First create a user control called CustomUserCtrl. Its code will look something like this:

public partial class CustomUserCtrl : System.Web.UI.UserControl
{
    private System.Delegate _delWithParam;
    public Delegate PageMethodWithParamRef
    {
        set { _delWithParam = value; }
    }

    private System.Delegate _delNoParam;
    public Delegate PageMethodWithNoParamRef
    {
        set { _delNoParam = value; }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void BtnMethodWithParam_Click(object sender, System.EventArgs e)
    {
        //Prepare the parameter for the page method
        object[] obj = new object[1];
        obj[0] = "Parameter Value" as object;
        _delWithParam.DynamicInvoke(obj);
    }

    protected void BtnMethodWithoutParam_Click(object sender, System.EventArgs e)
    {
        //Invoke a page method with no parameter
        _delNoParam.DynamicInvoke();
    }
}

Then add this user control to an aspx page. The code behind of the page is:

public partial class _Default : System.Web.UI.Page
{
    delegate void DelMethodWithParam(string strParam);
    delegate void DelMethodWithoutParam();

    protected void Page_Load(object sender, EventArgs e)
    {
        //Set method references to the user control's delegate properties
        DelMethodWithParam delParam = new DelMethodWithParam(MethodWithParam);
        this.UserCtrl.PageMethodWithParamRef = delParam;

        DelMethodWithoutParam delNoParam = new DelMethodWithoutParam(MethodWithNoParam);
        this.UserCtrl.PageMethodWithNoParamRef = delNoParam;
    }

    private void MethodWithParam(string strParam)
    {
        Response.Write("<br/>It has parameter: " + strParam);
    }

    private void MethodWithNoParam()
    {
        Response.Write("<br/>It has no parameter.");
    }
}

BtnMethodWithParam and BtnMethodWithoutParam are two buttons on the user control that invoke the methods in the parent page. On Page_Load of the page, we set references to the page class methods into the delegate-typed properties of the user control. Click the different buttons of the user control, and you will see the MethodWithParam(string strParam) and MethodWithNoParam() methods called.

This is all we have to do to call page class methods from a user control in ASP.Net.

Load ASP.Net User Control Dynamically Using jQuery

October 10, 2009

Today we will explore how to load an ASP.Net user control at run time using jQuery. jQuery’s load() method helps here; it has the following definition.

load(url, data, callback): performs a GET request by default, but if extra data parameters are passed, a POST occurs.
url (string): URL of the required page
data (map, key/value pairs): data that will be sent to the server
callback (function): invoked when the request completes, not necessarily on success
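
For example, a small sketch (the data keys are illustrative): passing a data map switches the request to a POST, and the callback runs whenever the request completes:

$("#UserCtrl").load("SampleUserCtrl.ascx", { mode: "compact" },
    function (responseText, textStatus) {
        // textStatus can be "error" as well as "success"
        if (textStatus === "error") {
            alert("Could not load the user control.");
        }
    });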

Next comes the custom HttpHandler that will load the required user control from the URL given to load(). We all know that the end point for any request made in ASP.Net is either a built-in or a custom HttpHandler.

Let’s see an example. In the ASP.Net application, add one aspx page and one user control. Then add a class that implements IHttpHandler. The aspx markup will look something like this.

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Load ASP.Net User Control</title>
    <script src="jquery-1.2.6.js"></script>
    <script>
        $(document).ready(function() {
            $("#BtnLoadUserCtrl").click(function() {
                $("#UserCtrl").load("SampleUserCtrl.ascx");
            });
        });
    </script>
</head>
<body>
    <form runat="server">
    <div>
        <br />
        <input type="button" id="BtnLoadUserCtrl" value="Load User Control" /> <br />
        <div id="UserCtrl"></div>
    </div>
    </form>
</body>
</html>

The code is quite readable: on the click of the BtnLoadUserCtrl button, we try to load the SampleUserCtrl.ascx user control into the <div> element with id UserCtrl.

Then we write our custom HttpHandler, called jQueryHandler, as below.

public class jQueryHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Host the loaded control in a dummy page and execute that page,
        // writing the rendered HTML into the response output
        using (var dummyPage = new Page())
        {
            dummyPage.Controls.Add(GetControl(context, dummyPage));
            context.Server.Execute(dummyPage, context.Response.Output, true);
        }
    }

    private Control GetControl(HttpContext context, Page host)
    {
        // URL path given by the load() call on the button click
        string strPath = context.Request.Url.LocalPath;
        // The host page compiles and instantiates the .ascx
        return host.LoadControl(strPath);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

Do not forget to register this HttpHandler in the web.config.

<httpHandlers>
    <add verb="*" path="*.ascx" type="JQUserControl.jQueryHandler, JQUserControl"/>
</httpHandlers>

This configuration tells ASP.Net that jQueryHandler will process requests for files with the .ascx extension, for all verbs (GET, POST, etc). The type attribute value has the form:
type="Namespace.TypeName, Assembly where the handler can be found"

Now we are ready to test our sample. Run the page, click the button, and see that SampleUserCtrl.ascx is loaded.

I hope we can now extend this concept to fit any such programming requirement in future.
Happy Coding!

CacheItemRemovedCallback Example in ASP.Net

September 20, 2009

Notify When an Item is Removed from Cache in ASP.Net

While adding or inserting an item into the cache object, we also add a dependency object to ensure the cached item is automatically invalidated when a change is detected in the dependent object, a file for example. We then read the item again from the original source, or update it, so that it stays fresh. This is one of the main appeals of the ASP.Net caching feature: one can decide upon dependencies and an expiry-time policy. Other properties can be combined to set the scope of the cached object in time and location; see also HttpCacheability.

But today we will explore the CacheItemRemovedCallback delegate provided by ASP.Net. It notifies the application when an item is removed from the cache, along with the reason. The CacheItemRemovedReason enumeration is used as a parameter of the callback method to convey that reason.

Let’s take an example to know more about the CacheItemRemovedCallback delegate.

protected void Page_Load(object sender, EventArgs e)
{
    //Fetch the item list from the cache
    ArrayList cacheditems = CachedItemList();
}

private static CacheItemRemovedCallback OnCachedItemRemoved = null;

private ArrayList CachedItemList()
{
    OnCachedItemRemoved = new CacheItemRemovedCallback(CachedItemRemovedCallback);
    ArrayList cacheditems = HttpContext.Current.Cache.Get("CACHE_KEY") as ArrayList;

    // Found in cache
    if (cacheditems != null)
    {
        return cacheditems;
    }
    else
    {
        // Not found in cache: rebuild the list and insert it with a
        // file dependency and the removal callback
        cacheditems = ItemList();
        HttpContext.Current.Cache.Insert("CACHE_KEY", cacheditems,
            new System.Web.Caching.CacheDependency(Server.MapPath("~/CacheDependentFile.txt")),
            Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
            CacheItemPriority.Default, OnCachedItemRemoved);

        return cacheditems;
    }
}

private static void CachedItemRemovedCallback(string key, Object val, CacheItemRemovedReason reason)
{
    if (reason == CacheItemRemovedReason.DependencyChanged)
    {
        // Log the cache key name, reason, and time details
        // when the cached object was removed from the cache
    }
}

private ArrayList ItemList()
{
    ArrayList lst = new ArrayList();
    lst.Add("First Item");
    lst.Add("Second Item");
    lst.Add("Third Item");
    lst.Add("Fourth Item");
    lst.Add("Fifth Item");
    return lst;
}

Let’s look carefully at the callback method:

private static void CachedItemRemovedCallback(string key, Object val, CacheItemRemovedReason reason)
{
    //
}

The first parameter is the cache key name we used to store the item (here, an ArrayList of values). The second parameter is the object we stored in the cache. The third parameter is an enumeration with the values Removed, Expired, Underused, and DependencyChanged.

In the above example, if any change is made to the CacheDependentFile.txt file, the callback method fires automatically, and the reason captured will be DependencyChanged. Try and see.

Important point: when using a CacheItemRemovedCallback, make sure the callback method (CachedItemRemovedCallback in the sample above) is a static method; otherwise the delegate held by the cache keeps the page instance alive and it cannot be garbage collected.

This feature can be used in many cases, such as logging why a cached item was removed from the cache, and many others depending upon the scenario.