17/08/2011

Eradicate Microsoft Exchange, YES WE CAN!

It was April 2010: the Exchange 2003 server was bloated, everybody was complaining, and even certified Microsoft engineers didn't know what the cause was. I was dying to ditch this piece of crapware, and now I had the perfect excuse.

I have the fortune of having a very bright and open-minded CEO, and after a talk with her I had "carte blanche". The only thing she demanded was to keep the costs at a minimum, and I told her: we only have to pay for the hardware, no software licenses. We'll go FOSS all the way.

Several months earlier I had been searching for alternatives, and the one that caught my eye was Zimbra. It had everything a decent collaboration suite must have, with the bonus that it offers tools for migrating from Exchange.

So everything was in place; I just had to wait for the new server to arrive.

At that time Zimbra was not compatible with Ubuntu 10.04, so I went with CentOS. I have always been a Debian (and derivatives) guy and never much liked the Red Hat way, but it was the safe choice.

Well, the migration was nothing but a success. The only issue I had was with Exchange Public Folders: some of the contacts were lost. The migration tools Zimbra provides are splendid, and I bet they are even better now.

You have two options for Zimbra: the Open Source version, which is free, and the paid Network Edition.

My plan was to eliminate even more overpriced Microsoft products. The free version lacks the Outlook connector. No worries, we'll go the browser way, and that also gives me the chance to obliterate Internet Explorer in favor of Google Chrome.

So no Outlook, no IE, enter OpenOffice. Despite some initial complaints, people got used to the new setup. Some still use Office XP, others OpenOffice.

The biggest drawback of the free version is the lack of a backup/recovery solution. You have to write a shell script to make the backups yourself (look in the Zimbra Wiki).
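
For illustration, here is a minimal sketch of the cold-backup approach described on the Zimbra Wiki: stop the services, copy /opt/zimbra somewhere safe, and start them again. The backup destination is an assumption; adapt it to your setup.

#!/bin/bash
# Minimal cold backup sketch for the Zimbra OSS edition (run as root)
BACKUP_DIR=/backup/zimbra/$(date +%Y%m%d)   # hypothetical destination
mkdir -p "$BACKUP_DIR"

# Stop Zimbra so the data on disk is consistent
su - zimbra -c "zmcontrol stop"

# Copy the whole installation (mail store, LDAP, database, config)
rsync -a /opt/zimbra/ "$BACKUP_DIR/"

# Bring the services back up
su - zimbra -c "zmcontrol start"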

Sync with mobile devices is also missing, but the web version is very good; I use it on my iPhone/iPad and it works very well.

Zimlets are little JavaScript programs you can write to enhance Zimbra; the possibilities here are immense.

Updates are super simple: just unpack the release, run the install script, and the new version is installed.
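
Roughly, an upgrade looks something like this (a sketch; the exact tarball name depends on the release you download):

tar xzf zcs-*.tgz
cd zcs-*/
sudo ./install.sh
# The installer detects the existing installation and offers to upgrade it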

You can also start with this version and later migrate to the paid one, keeping everything, in case you need the extra features.

So yes, you can get rid of that piece of junk called Exchange, at no cost.

 

 

16/08/2011

Zend Cache

For my API (I'll write a post about that someday) I have a request that involves fetching all the jobs produced in one year.
As you can imagine, this is resource intensive and takes a lot of time to fetch. So it was time to use Zend_Cache.
All my previous work runs inside an intranet, where everything is quite snappy, so I never had a chance to use this component from Zend Framework.
I decided to test two backends, one with Memcached and the other with files. To my surprise (or not), the file version is as fast as the Memcached one, at least in this situation, so I chose the file one. One less thing to configure.
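
For reference, the Memcached variant I tested looked roughly like this (a sketch; the host and port are assumptions for a default local memcached):

// Memcached backend sketch: same frontend options, different backend
$frontendOptions = array(
    'lifetime' => 259200,
    'automatic_serialization' => true,
);

$backendOptions = array(
    'servers' => array(
        array('host' => 'localhost', 'port' => 11211),   // assumed local memcached defaults
    ),
);

$cache = Zend_Cache::factory('Core', 'Memcached', $frontendOptions, $backendOptions);
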
As with almost all ZF components, Zend_Cache is very straightforward to configure.
We'll start with the Bootstrap configuration:
protected function _initCache()
{
    $frontendOptions = array(
        'lifetime' => 259200,                // cache entries live for 3 days (in seconds)
        'automatic_serialization' => true,   // lets us cache arrays/objects directly
    );

    $backendOptions = array(
        'cache_dir' => APPLICATION_PATH . '/../tmp/',   // where the cache files are written
    );

    $cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);

    // Also cache the table metadata queries
    Zend_Db_Table_Abstract::setDefaultMetadataCache($cache);

    // Make the cache object available to the rest of the application
    Zend_Registry::set('API_Cache', $cache);
}

Pretty simple, eh?
We have the configuration of the frontend and backend options. The lifetime is measured in seconds; in my case the cache is valid for 3 days (3 × 24 × 3600 = 259200). The backend option is the directory where the cache files will be stored.
Then we use the factory method to create the cache with those options, set it as the default Zend_Db_Table metadata cache for some extra speed, and store the cache object in the registry.
Using the Cache
Below is part of my REST class:
/**
 * Cache object
 * @var Zend_Cache_Core
 */
private $cache;

public function __construct()
{
    .......
    // Grab the cache object configured in the Bootstrap
    $this->cache = Zend_Registry::get('API_Cache');
}

.......

// Try to load the result from the cache; on a miss, build it and store it
// ($code, $key and $this->embalagem come from the parts of the class elided above)
if (($result = $this->cache->load('get3' . $code)) === false) {

    $prod = $this->embalagem->getJobParamsByClientCode($code);

    $result = array();
    $i = 0;
    foreach ($prod as $value) {
        $result['job' . $i] = $this->job($key, $value['obra']);
        $i++;
    }

    $this->cache->save($result, 'get3' . $code);
}

return $result;
.......
What we can see: first we check if the cache entry exists; if it doesn't, we fetch the data from the database and write it to the cache.
It's very simple to use this component and the gains are amazing.
I can tell you that the non-cached version took 22.3969 seconds to process and the cached version took only 1.4055 seconds. A tremendous speed gain, with a few lines of code.

Squid Reverse Proxy and SSL

In the last couple of days, I've been writing an API. Nothing too complicated, just a few resources so some of my company's customers can grab info directly into their systems.

So I need SSL to secure the data; the main problem is that I use Squid as a reverse proxy to forward each URL to the right server.

After fooling around I discovered that the package that comes with Ubuntu (10.04, I only use the LTS versions) does not come with SSL support out of the box, so we need to compile it from source.

I use version 2.7, but these instructions also work with v3.

Verify that it doesn't have SSL support (check the configure options printed in the output; --enable-ssl should be missing):

/usr/sbin/squid -v

You need to add the source repositories to /etc/apt/sources.list:

deb-src http://pt.archive.ubuntu.com/ubuntu lucid main restricted universe multiverse
deb-src http://pt.archive.ubuntu.com/ubuntu lucid-updates main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse

Change the mirror URL according to your country.

First you need to install the following:

cd /usr/src
sudo apt-get install libssl-dev
sudo apt-get install devscripts build-essential fakeroot
sudo apt-get source squid
sudo apt-get build-dep squid
sudo apt-get install libcppunit-dev
sudo apt-get install libsasl2-dev
sudo apt-get install cdbs

Now you need to edit the Debian rules file:

cd squid-2.7.STABLE7/debian
pico rules

Now look for the line that contains: # Configure the package.

Before the last line add: --enable-ssl \

Save and exit.

Now build the packages:

debuild -us -uc

And install the resulting packages:

dpkg -i squid_2.7.STABLE7-1ubuntu12.3_i386.deb squid-cgi_2.7.STABLE7-1ubuntu12.3_i386.deb squid-common_2.7.STABLE7-1ubuntu12.3_all.deb

Adjust the Squid version in the file names to match the one you built.

You should now have Squid with SSL support; you can confirm it by running /usr/sbin/squid -v again and looking for --enable-ssl among the configure options.

Before continuing, just a reminder: you should have Squid installed on a different machine than your webserver, because if you don't, you have to change the ports that Apache (or whatever webserver you use) listens on. If you don't do this, Squid will listen on 3128 and 3129 instead of 80 and 443 and won't redirect correctly.
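
For reference, the plain HTTP reverse-proxy listener in squid.conf (Squid 2.6/2.7 syntax) looks something like this; the defaultsite value is just an assumption matching the example domain used below:

http_port 80 accel defaultsite=www.mycompany.pt vhost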

Any old machine will suffice; I can tell you that mine is a Pentium II, and it works like a charm.

Now you have 2 options:

1 - Create a self-signed certificate

2 - Buy one.

I chose the second because I found that Comodo has one that costs only 12€. It's called PositiveSSL and you can find it here.

If you choose the first one, you can find instructions for creating a self-signed certificate here.
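
In short, with OpenSSL it boils down to something like this (a sketch; the file names simply match the squid.conf example below):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/squid/ssl-cert/my_key.key \
    -out /etc/squid/ssl-cert/my_cert.crt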

Now let's configure Squid to accept SSL connections.

First backup the current configuration:

sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.bak

Again you have 2 options here:

1 - configure globally to accept any SSL connection

2 - configure for each redirection.

If you choose the second one, you must have one certificate per sub-domain, or buy one that also covers sub-domains (a wildcard certificate; they are expensive).

I chose the first and did a little trick for my API. More on that later.

So edit the config file:

pico /etc/squid/squid.conf

Now add at the top of the file:

https_port 443 cert=/etc/squid/ssl-cert/my_cert.crt key=/etc/squid/ssl-cert/my_key.key defaultsite=www.mycompany.pt vhost

Of course you need to change that line to match your configuration.

That line says that for every connection that comes in via HTTPS on port 443, Squid will present that particular certificate. You don't have to install it on your webserver because it's Squid that handles the SSL termination. Quite nice indeed.

So now you have to write the redirect rules. They are quite simple:

cache_peer 192.168.0.1 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 www.mycompany.pt
acl sites_server_1 dstdomain www.mycompany.pt
http_access allow sites_server_1

What this means:

You are telling Squid to forward every request that comes in for www.mycompany.pt to the server 192.168.0.1; it works for both HTTP and HTTPS.

Now I have a subdomain, api.mycompany.pt.

The rules are the same:

cache_peer 192.168.0.1 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 api.mycompany.pt
acl sites_server_2 dstdomain api.mycompany.pt
http_access allow sites_server_2

The requests go to the same IP, because that server is configured with several vhosts.
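
On the origin server that simply means name-based virtual hosts; in Apache 2.2 it would look roughly like this (the DocumentRoot paths are assumptions for illustration, matching the directories used further down):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.mycompany.pt
    DocumentRoot /var/www/mycompany
</VirtualHost>

<VirtualHost *:80>
    ServerName api.mycompany.pt
    DocumentRoot /var/www/api/deploy/public
</VirtualHost>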

So now we have a problem: the certificate is only valid for www.mycompany.pt.

Let's do a little trick. The API is REST-based, so it receives several requests.

I'm going to use a GET request as an example:

The URL is:

http://api.mycompany.pt/api/rest?method=test&key=123456&testsearch=myparticularsearch

So if we use HTTPS on that URL it will throw a certificate error (the hostname doesn't match). My solution is to use a symbolic link. I create a new directory inside the directory where www.mycompany.pt resides.

cd /var/www/mycompany
mkdir secure

cd secure
ln -s /var/www/api/deploy/public/api/ api

Now if we use https://www.mycompany.pt/secure/api/rest?method=test&key=123456&testsearch=myparticularsearch the certificate works fine.

I can have the site for the API, with the documentation and so on, and a secure link for the data.

It's not the perfect solution, but it works just fine.

I will have a problem when the new site for www.mycompany.pt is finished, because it will be using the ZF MVC structure and "secure" will be a module name.

I'll deal with that later......