open source byte
Monday, January 16, 2012
MaintainJ Reverse Engineering Tool
MaintainJ reduces the time needed to understand complex enterprise Java applications from days to minutes. It also helps document an application's runtime behaviour using sequence and class diagrams.
MaintainJ generates runtime sequence diagrams from the call trace captured for a single use case while you run the application. The captured information includes the data at each method call, any SQL calls to the database, and the response time of each call. You can trace applications running on a single JVM or on multiple JVMs and view the end-to-end call trace in a single sequence diagram.
Supported applications:
J2SE applications/Applets/JUnit
J2EE applications: Tomcat 4, 5, 6 & 7; JBoss 3, 4 & 5; WebLogic 8, 9 & 10; WebSphere 5, 6 & 7. GlassFish and Jetty servers are also supported
Databases: Oracle, DB2, MySQL, PostgreSQL and Derby out of the box. Microsoft SQL Server is also supported
MaintainJ UML Diagram Editor renders sequence and class diagrams from the call trace files.
MaintainJ is priced from $150 per user, which is very reasonable for the time it saves in understanding Java applications.
Reverse Engineering Tool JavaCallTracer
JavaCallTracer is a reverse engineering tool for Java/J2EE programs.
It can be used when you need to analyze a Java program that was developed by someone else and little documentation is available. Normally in such a situation people study the source code to understand how it works, and the end product of that study is usually a sequence diagram. This is a time-consuming and error-prone process; the tool automates that manual work, saving a lot of time and effort.
The tool can also be used for design validation after the coding phase of a project. Once all the code has been written, I as an architect can just run the program, generate sequence diagrams from the working program, and compare them against my design sequence diagrams to check whether the design was properly followed.
This tool is actually a combination of two tools:
1] Calltracer
2] Calltrace2Seq
The Calltracer tool attaches to the JVM of your Java program while it is running and records the call trace, which can then be printed out in XML or text format.
Once you have used the Calltracer tool to generate an XML output, you can then use the Calltrace2Seq tool (a simple Java program) to convert it to a UML sequence diagram. More details are available at "Using the Calltrace2Seq tool".
The sequence diagram is generated in SVG format, which can be opened in a browser.
In case your Java program has multiple threads, you will find that the generated output has multiple Thread elements, each containing the call trace corresponding to one thread of the program.
Sunday, January 15, 2012
Top NoSQL databases
HBase is an open-source, nonrelational, distributed database that is modeled after Google's BigTable and written in Java. It was developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of Hadoop Distributed Filesystem (HDFS).
Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System papers. Hadoop is a top-level Apache project being built by a global community of contributors, written in Java. Yahoo is the largest contributor to the project, and uses Hadoop extensively across its businesses.
MongoDB is an open-source, high-performance, schema-free, document-oriented NoSQL database system written in C++. It manages collections of BSON documents that can be nested in complex hierarchies and still be easy to query and index, enabling many applications to store data in a natural way that matches their native data types and structures. 10gen began developing MongoDB in October 2007; the first public release was in February 2009.
Apache CouchDB, commonly referred to as CouchDB, is an open-source document-oriented database written mostly in the Erlang programming language. It is part of the NoSQL group of data stores and is designed for local replication and to scale horizontally across a wide range of devices. CouchDB is supported by commercial enterprises Couchbase and Cloudant.
Apache Cassandra is an open source distributed database management system. It is an Apache Software Foundation top-level project designed to handle very large amounts of data spread across many commodity servers while providing a highly available service with no single point of failure. It is a NoSQL solution that was initially developed by Facebook and powered their Inbox Search feature until late 2010. Jeff Hammerbacher, who led the Facebook Data team at the time, has described Cassandra as a BigTable data model running on an Amazon Dynamo-like infrastructure.
Cassandra provides a structured key-value store with tunable consistency. Keys map to multiple values, which are grouped into column families. The column families are fixed when a Cassandra database is created, but columns can be added to a family at any time. Furthermore, columns are added only to specified keys, so different keys can have different numbers of columns in any given family. The values from a column family for each key are stored together. This makes Cassandra a hybrid data management system between a column-oriented DBMS and a row-oriented store. Also, besides borrowing BigTable's data model, it has properties inspired by Amazon's Dynamo, such as eventual consistency, the Gossip protocol, and a master-master way of serving read and write requests.
Server Side Javascript with Node.js
Node.js is all the buzz at the moment, and makes creating high performance, real-time web applications easy. It allows JavaScript to be used end to end, both on the server and on the client. This tutorial will walk you through the installation of Node and your first “Hello World” program, to building a scalable streaming Twitter server.
JavaScript has traditionally only run in the web browser, but recently there has been considerable interest in bringing it to the server side as well, thanks to the CommonJS project. Other server-side JavaScript environments include Jaxer and Narwhal. However, Node.js is a bit different from these solutions, because it is event-based rather than thread-based. Web servers like Apache that are used to serve PHP and other CGI scripts are thread-based because they spawn a system thread for every incoming request. While this is fine for many applications, the thread-based model does not scale well with many long-lived connections, like you would need in order to serve real-time applications like FriendFeed or Google Wave.
“Every I/O operation in Node.js is asynchronous…”
Node.js uses an event loop instead of threads, and is able to scale to millions of concurrent connections. It takes advantage of the fact that servers spend most of their time waiting for I/O operations, like reading a file from a hard drive, accessing an external web service or waiting for a file to finish uploading, because these operations are much slower than in-memory operations. Every I/O operation in Node.js is asynchronous, meaning that the server can continue to process incoming requests while the I/O operation is taking place. JavaScript is extremely well suited to event-based programming because it has anonymous functions and closures, which make defining inline callbacks a cinch, and JavaScript developers already know how to program in this way. This event-based model makes Node.js very fast, and makes scaling real-time applications very easy.
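The waiting-versus-overlapping idea above can be sketched even in plain shell. Three simulated one-second I/O waits take about three seconds when handled one at a time, but about one second when they overlap, which is exactly the win an event loop gets by not blocking on I/O. This is only an analogy for the scheduling model, not Node.js code.

```shell
# Simulate three slow I/O-bound "requests" (1 second each).

# Thread-per-request spirit with one worker: handle them one after another.
start=$(date +%s)
for i in 1 2 3; do sleep 1; done
seq_time=$(( $(date +%s) - start ))

# Event-loop spirit: start all three waits, then collect the results.
start=$(date +%s)
for i in 1 2 3; do sleep 1 & done
wait
par_time=$(( $(date +%s) - start ))

echo "sequential: ${seq_time}s, overlapped: ${par_time}s"
```

The overlapped run finishes in roughly the time of the single slowest wait, which is why an event-driven server stays responsive while I/O is in flight.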
Saturday, January 14, 2012
Server Side Spam Filtering with Postfix, SpamAssassin and Maildrop
Server-side filtering of spam in Postfix, to move all spam-tagged mails to a user's spam/junk folder, is pretty simple.
This solution moves all mails whose subject is tagged ***** SPAM ***** by SpamAssassin to the Junk folder, so the spam-tagged mails don't appear in the INBOX.
Install Maildrop and configure it as given below to automatically move those mails to a Junk folder.
Steps to set this up with Postfix and SpamAssassin:
First, setup your /etc/maildroprc file:
# commands and variables for making the mail directories
maildirmake=/usr/bin/maildirmake
mkdir=/bin/mkdir
rmdir=/bin/rmdir
MAILDIR=$DEFAULT
# make the user's mail directory if it doesn't exist
`test -e $MAILDIR`
if ($RETURNCODE != 0)
{
`$mkdir -p $MAILDIR`
`$rmdir $MAILDIR`
`$maildirmake $MAILDIR`
}
# make the .Junk folder if it doesn't exist
JUNK_FOLDER=.Junk
_JUNK_DEST=$MAILDIR/$JUNK_FOLDER/
`test -d $_JUNK_DEST`
if ($RETURNCODE != 0 )
{
`$maildirmake $_JUNK_DEST`
#auto subscribe. the following works for courier-imap
`echo INBOX.Junk >> $MAILDIR/courierimapsubscribed`
}
# If the Spam-Flag is set, move the mail to the Junk folder
if (/^X-Spam-Flag:.*YES/)
{
exception {
to $DEFAULT/.Junk/
}
}
The comments clearly state what’s going on there.
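To see which mails the rule at the bottom of the maildroprc catches, the same header pattern can be tried with grep on two hypothetical one-line header files (the file names and header values here are made up for illustration):

```shell
# Hypothetical sample headers, as SpamAssassin would write them.
printf 'X-Spam-Flag: YES\n' > /tmp/spam.hdr
printf 'X-Spam-Flag: NO\n'  > /tmp/ham.hdr

# Same pattern as the maildroprc rule: only spam-flagged mail matches.
grep -E '^X-Spam-Flag:.*YES' /tmp/spam.hdr && echo "spam -> .Junk"
grep -E '^X-Spam-Flag:.*YES' /tmp/ham.hdr  || echo "ham -> INBOX"
```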
Once that’s set up, check /etc/postfix/master.cf and make sure the
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
is not commented out.
Next, set the /usr/bin/maildrop binary setuid root, so that maildrop can interact with authdaemon and the mail folders.
#chmod +s /usr/bin/maildrop
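The effect of `chmod +s` can be checked safely on a scratch file first (a stand-in for /usr/bin/maildrop, which you may not want to experiment on):

```shell
# Create a scratch file and give it ordinary executable permissions.
touch /tmp/fakebin
chmod 755 /tmp/fakebin

# +s sets the setuid and setgid bits, shown as 'rws' in the listing.
chmod +s /tmp/fakebin
ls -l /tmp/fakebin
```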
Then add this to /etc/postfix/main.cf file:
virtual_transport = maildrop
maildrop_destination_recipient_limit = 1
If there is another virtual_transport line, be sure to comment it out first.
Last, set the permissions on the authdaemon directory so that maildrop can access it.
chown vmail /var/run/courier/authdaemon
Thursday, January 12, 2012
Services, Application, Server, Cloud & Network Monitoring
Hyperic HQ provides application monitoring and performance management for virtual, physical, and cloud infrastructures. It auto-discovers resources of 75+ technologies, including vSphere, and collects availability, performance, utilization, and throughput metrics.
Features
* vSphere auto-discovery of all components of virtualized applications
* Automatically discovers, monitors, and manages software and network resources
* Monitors apps on any platform, including Unix, Linux, Windows, Solaris, AIX, HPUX, VMware, and Amazon Web Services
* Built-in support for 75 common components—including databases, application servers, middleware, web servers, network devices and more
* Optimized for virtual environments with integration with vCenter and vSphere
Technologies Managed by Hyperic
* Operating Systems
o AIX Monitoring
o HP/UX Monitoring
o Linux Monitoring
o Solaris Monitoring
o Windows Monitoring
o Mac OSX Monitoring
o FreeBSD Monitoring
* Web Servers
o Apache Monitoring
o IIS Monitoring
o Sun Java System Monitoring
* Application Servers
o WebLogic Monitoring
o WebSphere Monitoring
o JBoss Monitoring
o Apache Geronimo
o ColdFusion Monitoring
o JRun Monitoring
o .Net Runtime Monitoring
o Tomcat Monitoring
o Glassfish Monitoring
o Resin Monitoring
* Databases
o DB2 Monitoring
o SQL Server Monitoring
o MySQL Monitoring
o Oracle Monitoring
o PostgreSQL Monitoring
o Sybase Monitoring
* Messaging Middleware
o ActiveMQ Monitoring
o IBM MQ Monitoring
* Microsoft Technology
o MS Exchange Monitoring
o Microsoft Active Directory Monitoring
o Microsoft .Net Monitoring
* Virtualization
o VMware Monitoring
o XenServer Monitoring
* Mail Servers
o Postfix Monitoring
o Sendmail Monitoring
o Zimbra Monitoring
* Network Management
o Alfresco Monitoring
o Bind Monitoring
o MemCached Monitoring
o Network Device Monitoring
o Network Services Monitoring
o Nagios Monitoring
o NTP Monitoring
o ZXTM Monitoring
o Custom Monitoring
* Application Management
o JMX Monitoring
* Distributed Platforms
o Bind Monitoring
o NTP Monitoring
* Application Platforms
o LAMP Monitoring
o LAM-J Monitoring
o J2EE Monitoring
* Integrated Applications
o ColdFusion Monitoring
o Alfresco Monitoring
Monday, January 9, 2012
Zimbra SSL Certificate Expiry Brings Down LDAP
Zimbra showed these errors when I tried to start it:
Unable to determine enabled services from ldap.
Unable to determine enabled services. Cache is out of date or doesn't exist.
The reason: I had installed an SSL certificate with 365 days' validity a year ago and forgot to regenerate it.
LDAP doesn't start with an expired SSL certificate.
Follow the steps below to generate an SSL certificate valid for 3650 days (ten long years).
## Regenerate SSL certificate and deploy as given below
su - zimbra -c 'zmcontrol stop'
rm -rf /opt/zimbra/ssl/*
rm -rf /opt/zimbra/ssl/.rnd
/opt/zimbra/java/bin/keytool -delete -alias my_ca -keystore /opt/zimbra/java/jre/lib/security/cacerts -storepass changeit
/opt/zimbra/java/bin/keytool -delete -alias jetty -keystore /opt/zimbra/mailboxd/etc/keystore -storepass `su - zimbra -c 'zmlocalconfig -s -m nokey mailboxd_keystore_password'`
vi /opt/zimbra/bin/zmcertmgr
# Find the line
# SUBJECT="/C=US/ST=N\/A/L=N\/A/O=Zimbra Collaboration Suite/OU=Zimbra Collaboration Suite/CN=${zimbra_server_hostname}"
# and change it to your company name - may not be required if the hostname is properly configured in /etc/hosts
# then find validation_days=365 and change it to validation_days=3650
# save /opt/zimbra/bin/zmcertmgr
/opt/zimbra/bin/zmcertmgr createca -new
/opt/zimbra/bin/zmcertmgr deployca -localonly
/opt/zimbra/bin/zmcertmgr createcrt self -new
/opt/zimbra/bin/zmcertmgr deploycrt self
su - zimbra -c 'zmcontrol start'
/opt/zimbra/bin/zmcertmgr deploycrt self
/opt/zimbra/bin/zmcertmgr deployca
su - zimbra -c 'zmupdateauthkeys'
/opt/zimbra/bin/zmcertmgr viewdeployedcrt
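To convince yourself what a 3650-day validity looks like before touching the Zimbra deployment, you can generate a throwaway self-signed certificate with plain openssl and inspect its expiry. The file names and CN below are hypothetical stand-ins, not Zimbra's actual certificate paths.

```shell
# Generate a disposable self-signed cert valid for 3650 days,
# mirroring the validation_days=3650 change made in zmcertmgr.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=mail.example.com" \
  -keyout /tmp/zimbra-test.key -out /tmp/zimbra-test.crt 2>/dev/null

# Print the expiry date (should be ten years out).
openssl x509 -noout -enddate -in /tmp/zimbra-test.crt

# -checkend N succeeds if the cert is still valid N seconds from now.
openssl x509 -noout -checkend 0 -in /tmp/zimbra-test.crt && echo "certificate valid"
```

The same `openssl x509 -noout -enddate` check against the deployed Zimbra certificate is a quick way to catch an upcoming expiry before LDAP refuses to start.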