
Tuesday, August 9, 2016

Check dependencies of local RPM package

We all know that for managing packages or software on any system, we need some kind of tool. Each distribution ships its own tool for the job.

For example, on RHEL/CentOS/Fedora we use RPM (RPM Package Manager) for all rpm package management, which takes care of installation, uninstallation, update, query, etc.



So, sometimes when we install a package we get lots of errors regarding dependencies. Here we discuss how to list the dependencies associated with a particular rpm file.


1.) Check an rpm file, i.e. the package is not yet installed:
         rpm -qpR {rpm-file}  
e.g.:
    -----
    -----
    output truncated..
  2.) If the package is already installed:
      rpm -qR {package-name}
e.g. :

   3.) Dry run without installing the package:
         rpm -Uvh --test {rpm-file}
  e.g.:

Finally, yes of course, if you skip these checks and just try to install, the installation itself will show you the list of missing dependencies as well.
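
As a quick illustration, assuming a hypothetical local file foo-1.0-1.el7.x86_64.rpm and an already installed package such as bash, the three checks look like this:

    rpm -qpR foo-1.0-1.el7.x86_64.rpm          # dependencies of a local rpm file (not installed yet)
    rpm -qR bash                               # dependencies of an already installed package
    rpm -Uvh --test foo-1.0-1.el7.x86_64.rpm   # dry run, nothing is actually changed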

Monday, April 25, 2016

Remount multiple NFS mount points on Client in one go

Sometimes we may have a number of mount points on an NFS client, and after changing any one of the mount parameters, we have to remount all of them. Doing umount and mount by hand on multiple partitions is really a hectic job, and there are chances of human error.
     
We can achieve the same with a single command. Sharing some other useful commands as well, before moving to the exact one :).

  • Get the list of all NFS mount points available on System:
                     Before moving ahead with the changes, let's see how many NFS partitions there are on the system. The commands below will do the trick and give you all the NFS mount points, with and without headers.
    • Without headers -
                     #df -PF nfs | awk '{if(NR>1)print}'     # This command will suppress header line 


    • With headers-
                   #df -PF nfs 

  • Here goes the actual thing: unmount and remount multiple NFS partitions in one go after making changes to the parameters.
                    #for M in $(mount | awk '/type nfs / {print $3;}'); do echo $M; sudo umount $M && sudo mount $M && echo "ok :)"; done
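
The same loop written out as a small script, for readability (just a sketch; it assumes the mount points are not busy, i.e. no process is holding files open on them):

    #!/bin/bash
    # Remount every currently mounted NFS filesystem so that new mount options take effect
    for M in $(mount | awk '/type nfs / {print $3}'); do
        echo "Remounting $M"
        if sudo umount "$M" && sudo mount "$M"; then
            echo "ok :)"
        else
            echo "failed on $M, check for open files (e.g. lsof +D $M)" >&2
        fi
    done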


[Note: Execute at your own risk after doing testing on test environments ;) ]

Monday, March 21, 2016

"ERROR: Could not find cookbook in your cookbook path, skipping it" in Chef

Chef is an automation framework that helps us deploy code or configuration across multiple systems, which may be physical, virtual, or cloud systems.

Here I just want to highlight one small issue I hit while trying to upload a cookbook from my workstation. Everything was in place, but it still threw the error below:

$ knife cookbook upload cookbook_name
ERROR: Could not find cookbook cookbook_name in your cookbook path, skipping it
ERROR: Failed to upload 1 cookbook.

By default, knife uses the cookbook path specified in the ~/.chef/knife.rb file. In my case everything was configured correctly, as below:

$ cat ~/.chef/knife.rb | grep cookbook_path
cookbook_path [ '.', '..' ]

I was trying to upload using the directory name given to the cookbook.


After a lot of searching, I finally got to know that knife compares the cookbook name against the one defined in the metadata.rb file inside the cookbook directory, not the directory name itself. Once I corrected metadata.rb, it worked like a charm:
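
For illustration, with a hypothetical cookbook directory called cookbook_name, the name line in metadata.rb has to match what you pass to knife cookbook upload:

    $ cat cookbook_name/metadata.rb
    name             'cookbook_name'
    maintainer       'Your Name'
    version          '0.1.0'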


Friday, October 2, 2015

Get Oracle Version details

In this small post I am sharing the commands to get details about the Oracle version you are using. Although much of this is covered in the Oracle documentation, I thought of sharing these small tips :).

Steps:

Connect to the Oracle DB using a CLI or UI tool, as you wish. Here I am connecting with Oracle SQL Developer. There are a number of ways to get the details; I am sharing the ones below.

  1. select * from v$version;
  2. select version from v$instance;
  3. select * from product_component_version;
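
If you prefer the command line over SQL Developer, the same queries can be run from sqlplus as well (a quick sketch; the connect string and credentials below are placeholders):

    $ sqlplus system/password@//dbhost:1521/ORCL
    SQL> select * from v$version;
    SQL> select version from v$instance;
    SQL> select * from product_component_version;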

Regarding the release number format, there is a very good explanation on the Oracle Doc site. Please go through it for details.

Thanks!!

Thursday, October 1, 2015

boot2docker Error

I installed boot2docker, and when I tried to play with Docker I started getting the below error for every docker command that I ran.

Error

FATA[0000] Get http:///var/run/docker.sock/v1.18/version: dial unix /var/run/docker.sock: An address incompatible with the requested protocol was used.. Are you trying to connect to a TLS-enabled daemon without TLS?



I checked the VM status and everything seemed good, as below:


After some troubleshooting, I found that the issue was with some environment variables. Basically there are three variables you need to set to make this work. On Windows you can use the set command, and on Linux you can use the export command:

   set DOCKER_HOST=tcp://192.168.59.103:2376    
   set DOCKER_CERT_PATH=C:\Users\kuldeep.d.sharma\.boot2docker\certs\boot2docker-vm   
   set DOCKER_TLS_VERIFY=1

Note: When you initialize boot2docker, it will print all these details and ask you to update the variables.
P.S. - Change the above values accordingly :).
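
On Linux/OS X the equivalent would be the export form (same placeholder values as above; use whatever boot2docker prints for your setup):

   export DOCKER_HOST=tcp://192.168.59.103:2376
   export DOCKER_CERT_PATH=~/.boot2docker/certs/boot2docker-vm
   export DOCKER_TLS_VERIFY=1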

Below is the screenshot where I tried setting each variable and observed the different results and the dependencies of these variables on each other.




Thanks!!

Tuesday, August 4, 2015

Installing Jboss A-MQ 6.2

JBoss A-MQ 6.2 was released on 2015-06-23 with lots of bug fixes, along with a major switch from ActiveMQ 5.9 to 5.11. Below are the steps for installing the new version and exploring the messaging world :).



1.) Download the JBoss A-MQ 6.2.0.GA zip and its md5 checksum to verify the integrity.

2.) Compare the md5 checksum of the zip file with the downloaded one.
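
For example (the archive file name below is illustrative; use the exact file you downloaded):

    $ md5sum jboss-a-mq-6.2.0.GA.zip
    $ cat jboss-a-mq-6.2.0.GA.zip.md5

The two checksum values must match before you unpack the archive.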

3.) If both are the same, unpack the archive. If you are unpacking the archive on Windows, make sure the path name does not contain any spaces or special characters such as %, $, #, etc.

4.) Configure users and roles as per your requirements in $AMQ_HOME/etc/users.properties. The format is as below.
# USER=PASSWORD,ROLE1,ROLE2,…

Note: I am using a simple admin password here for the demo, but in live scenarios please choose a strong password, as this password is stored in plain text. JBoss A-MQ 6.2 supports RBAC (Role Based Access Control), so we can assign different roles as needed.
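
For example, a demo admin entry would look like this (the user name, password, and role here are just demo values):

    # $AMQ_HOME/etc/users.properties
    admin=admin,admin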

5.) Start the AMQ instance.
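
From the unpacked installation directory, the container is typically started with the amq script (amq.bat on Windows):

    $ cd $AMQ_HOME
    $ ./bin/amq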

6.) Log in to the console to view runtime information about the container.

7.) Verify the installation:
a.) Send messages using the command below; by default it will send 1000 messages to the TEST queue.
#./bin/client "activemq:producer --user Username --password Password"
b.) Check the status of the messages:
#./bin/client "activemq:dstat"
c.) Run the consumer client to consume the messages from the TEST queue as below:
#./bin/client "activemq:consumer --user Username --password Password"
d.) Verify the queue status again by running the following command:
#./bin/client "activemq:dstat"

You can explore all this information and much more using the hawtio console. It provides a really impressive pictorial view for digging further and monitoring things using JMX MBeans.

Below are a few screenshots from the hawtio console:
ActiveMQ Tab :

Dashboard Tab:

JMX Tab:


Friday, July 31, 2015

Chef Server Overview

Currently we are living in a digital world, and IT infrastructure is growing day by day. In such a situation it becomes difficult to manage a large number of servers, especially when we need to install or configure the same thing on multiple systems.

                         
    So, Chef is going to make it easy to manage the whole infrastructure without much effort. Chef is an automation framework with which we can install/deploy servers and applications to any VM, cloud, or physical server.

These are the terms we are going to use more often as we proceed:

  1. chef-client : This is the tool installed on every system managed by Chef. It performs all the tasks specified by the run-list and also fetches any updated content from the chef-server.
  2. Workstation : Workstations are the nodes/systems configured to author changes and push them to the server. We can also bootstrap new nodes and apply changes to them from a workstation.
  3. chef-server : The main server (hub/store) :), used to centralize and store all information in one place. Everything, i.e. cookbooks, roles, and policy settings, is uploaded to the chef-server from a workstation. We also have the user-friendly Chef management console, from where we can manage data bags, attributes, run-lists, roles, etc.
  4. Nodes : A node is any system where we have to install or configure anything. A node may be a physical, virtual, or cloud system. Nodes are configured by chef-client, so chef-client must be installed on every node that needs to be managed by the chef-server.
  5. Cookbooks : A cookbook is the main part of the whole configuration. It defines a scenario and contains all the information and configuration needed to support that scenario:
    1. Recipes : specify the resources we can use and the execution steps for those resources.
    2. Attributes : special values which can be referred to in recipes.
    3. Files : static files/data that are needed as-is.
    4. Templates : used to store dynamic or common data with some changes.
  6. knife : knife is the main tool used for interaction between the local repo on a workstation and the chef-server. We push data to the chef-server using knife, and it can then be used by any number of nodes managed by the chef-server. Below are the things knife helps us manage (see the example commands after this list):
    1. Nodes
    2. Cookbooks and Recipes
    3. Roles
    4. Environments
    5. Data bags
  7. Bookshelf : This component of the chef-server is used to store and manage cookbook data (templates, files) with versioning. All cookbook content is stored by checksum, so if two different versions of a cookbook, or two different cookbooks, contain the same file or template, Bookshelf stores that file/template only once.
  8. Message Queues : the search index receives messages with the help of the components below:
    1. RabbitMQ : used as the messaging server for the chef-server. All items for the search indexes are first added to a queue on the RabbitMQ messaging server.
    2. chef-expander : fetches messages from the RabbitMQ queue, processes them into the required format, and forwards them to chef-solr for indexing.
    3. chef-solr : wraps Apache Solr and exposes its REST API for indexing and search.
  9. Nginx : an open-source HTTP web and reverse-proxy server used as the front-end load balancer. Nginx serves all requests coming to the chef-server.
  10. PostgreSQL : used to store all repository data for the chef-server.
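
For illustration, a few common knife invocations run from the workstation (the cookbook name, node name, IP address, and SSH user below are placeholders):

    $ knife cookbook upload my_cookbook                      # push a cookbook to the chef-server
    $ knife node list                                        # list all nodes registered with the chef-server
    $ knife bootstrap 10.0.0.5 -N web01 -x someuser --sudo   # install chef-client on a new node and register it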



In the next article, we will try to create a setup with a chef-server, one node, and one workstation.