Glass Mapper V4 – Redis Cache Provider

Following on from my post on the new caching functionality in Glass Mapper V4, I thought I would try something I have been dying to do: using Redis to cache the models generated by Glass. From an architectural standpoint this is, for me, just plain amazing :D. In this post, I will show you how to set up a Redis-based cache provider for Glass Mapper V4. It assumes you already know how to set up caching on your models (shown in my previous post).

** Note – this is a proof of concept; it has not yet been tested in production, and all the regular no-warranty disclaimers apply.

Installing Redis (Windows)

To set up a local instance of Redis on my Windows development environment, I used the Microsoft Open Tech Redis implementation. There is information on their NuGet packages here.

Setting up the Redis Connectivity

To get connections to the Redis instance, I followed the Azure team's best practice, so my code looks something like this:

        private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
            new Lazy<ConnectionMultiplexer>(() =>
                ConnectionMultiplexer.Connect(ConfigurationManager.ConnectionStrings["glass-redis"].ConnectionString));

        public static ConnectionMultiplexer RedisConnection
        {
            get
            {
                return lazyConnection.Value;
            }
        }

Note that I have used a connection string to set this up, so you will need to add an appropriate connection string named “glass-redis” to your ConnectionStrings.config file; mine looks like this:

  <add name="glass-redis" connectionString="localhost,abortConnect=false" />

Binary Serialization

It’s worth noting that in order to serialize to binary, which the Redis provider requires, you will have to mark your classes with the [Serializable] attribute; this isn’t necessary for the (more commonly used) interfaces. In both cases, though, all types used must be binary serializable.
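For a class-mapped model, that just means adding the attribute; MyPageModel below is purely illustrative (it isn’t part of Glass or this solution):

using System;

namespace CardinalIT.Models
{
    // Hypothetical model – [Serializable] is the only addition needed for the
    // binary serializer; every property type must also be binary serializable.
    [Serializable]
    public class MyPageModel
    {
        public string Title { get; set; }

        public DateTime Published { get; set; }
    }
}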

To serialize the object, we need to create a binary serializer implementation. I am currently using the standard .NET BinaryFormatter, but I am experimenting with smaller, more lightweight alternatives.

namespace CardinalIT.Kernel.Caching
{
    public interface ISerializationRepository
    {
        byte[] Serialize<T>(T o);

        T Deserialize<T>(byte[] stream) where T : class;
    }
}

The concrete implementation then looks like this:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

namespace CardinalIT.Kernel.Caching
{
    public class NetBinarySerializationRepository : ISerializationRepository
    {
        public byte[] Serialize<T>(T o)
        {
            if (o == null)
            {
                return null;
            }

            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream())
            {
                binaryFormatter.Serialize(memoryStream, o);
                byte[] objectDataAsStream = memoryStream.ToArray();
                return objectDataAsStream;
            }
        }

        public T Deserialize<T>(byte[] stream) where T : class
        {
            if (stream == null)
            {
                return default(T);
            }

            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream(stream))
            {
                T result = (T)binaryFormatter.Deserialize(memoryStream);
                return result;
            }
        }
    }
}
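As a quick sanity check, the round trip with the serializer looks like this (MyPageModel is the hypothetical model from earlier):

var serializer = new NetBinarySerializationRepository();

// The model (and every type it references) must be binary serializable.
byte[] payload = serializer.Serialize(new MyPageModel { Title = "Home" });
MyPageModel restored = serializer.Deserialize<MyPageModel>(payload);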

The Glass Mapper Redis Cache Manager

Now that I have all the Redis connectivity in place, I can go ahead and add a concrete implementation of the Glass ICacheManager interface, which looks like this:

using System;
using CardinalIT.Core.Logging;
using Glass.Mapper.Caching;
using StackExchange.Redis;

namespace CardinalIT.Kernel.Caching
{
    public class GlassRedisCacheManager : ICacheManager
    {
        private readonly IDatabase db;
        private readonly IServer server;
        private readonly ISitecoreLog log;
        private readonly ISerializationRepository serializationRepository;

        public GlassRedisCacheManager(IDatabase db, IServer server, ISitecoreLog log, ISerializationRepository serializationRepository)
        {
            this.server = server;
            this.db = db;
            this.log = log;
            this.serializationRepository = serializationRepository;
        }

        public void ClearCache()
        {
            foreach (var key in server.Keys())
            {
                db.KeyDelete(key);
            }
        }

        public void AddOrUpdate<T>(string key, T value) where T : class
        {
            db.StringSet(key, serializationRepository.Serialize(value));
        }

        public T Get<T>(string key) where T : class
        {
            if (!Contains(key))
            {
                return null;
            }

            // Ensure that invalid serializations don't kill the caching
            try
            {
                return serializationRepository.Deserialize<T>(db.StringGet(key));
            }
            catch (Exception ex)
            {
                log.LogException(ex, "GlassRedisCacheManager.Get<T>(string, object) failed.");
                return default(T);
            }
        }

        public bool Contains(string key)
        {
            return db.KeyExists(key);
        }

        public object this[string key]
        {
            get { return Get(key); }
            set { Set(key, value); }
        }

        public object Get(string key)
        {
            return serializationRepository.Deserialize<object>(db.StringGet(key));
        }

        public void Set(string key, object value)
        {
            // Ensure that invalid serializations don't kill the caching
            try
            {
                db.StringSet(key, serializationRepository.Serialize(value));
            }
            catch (Exception ex)
            {
                log.LogException(ex, "GlassRedisCacheManager.Get(string, object) failed.");
            }
        }
    }
}

The component registrations (using SimpleInjector) then look something like this:

            Container.RegisterSingle<ISerializationRepository>(() => new NetBinarySerializationRepository());
            Container.Register<ISitecoreLog, SitecoreLog>();
            Container.RegisterSingle(() => RedisConnection.GetDatabase());
            Container.RegisterSingle(GetServer);

            Container.RegisterSingle<ICacheManager, GlassRedisCacheManager>();
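The GetServer helper isn’t shown above; a minimal sketch, assuming a single Redis endpoint, could look like this:

        private static IServer GetServer()
        {
            // Assumes one endpoint; pick the appropriate one if your
            // connection string lists several.
            var endpoint = RedisConnection.GetEndPoints()[0];
            return RedisConnection.GetServer(endpoint);
        }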

Conclusion

I think this is just plain awesome: it opens a lot of architectural doors and (hopefully) offers a big potential speed increase for your site :D. There are already plans afoot to look into further caching options in future versions of Glass Mapper, but this is the first stepping stone.

Happy Caching!


4 thoughts on “Glass Mapper V4 – Redis Cache Provider”

  1. How big are you finding the generated models? My initial reaction to this is that a redis cache seems a little unnecessary (though the custom caching offering is definitely cool), and I would only look to this level of caching if the data needed to be distributed and was of a sufficient scale that in-memory caching wasn’t suitable.

    • Thank you so much for your feedback Alex.

      Your generated model size is going to be entirely based on how complex your model is. Most Glass models are inherently simple (but I don’t have an actual size I can quote).

      It is worth mentioning that I don’t believe it is best practice in Glass to cache all your models. This option was designed for scenarios where HTML caching was not possible.

      It is also worth mentioning that this design has, at least in part, been based on articles from Microsoft on using Redis in Azure (not to say that it’s right, of course). If they are advocating the storage of model objects in a distributed cache such as Redis (on Azure), then they must have some confidence in it.

      In general, for smaller, less distributed builds, I agree that in-memory caching would probably be my first choice too for its simplicity; and if I could provide the serialization using MessagePack / Protobuf, this option would also be much smaller in payload size and faster to serialize. However, with the increasing popularity of Azure / AWS, I think this presents an option that I would certainly consider to help globalize a customer’s website.

      Of course, as with all these things, this is a proof of concept, not production code, but I am interested to see where it leads.

      • I agree that if anything, it’s interesting! Don’t get me wrong, my overall attitude towards this is really positive.

        I was perhaps a bit hasty; with large item-bucket stores and models that aren’t dynamic, it could be well worthwhile looking at a fast distributed caching store.

      • I am quite excited! I think it definitely provides some interesting architectural options, especially when coupled with (what I hope will soon be) the option to store individual properties (which could well be unserialized – string, int, long), reducing the payload to the Redis cache while still achieving a similar function.

        This could provide a neat dynamic: the ability to serve high-availability content before most of the weight of Sitecore has been applied, leaving only one of your cluster’s servers to take the hit.
