Sunday, July 22, 2012

lynx Webmaster Tips


Hi Friends,

Today we are going to play with lynx, the command-line browser, in Linux.

Fun in the Terminal With Lynx Browser


Get the text from a Web page as well as a list of links

lynx -dump "http://www.example.com/"

Get the source code from a Web page with Lynx

lynx -source "http://www.example.com/"

Get the response headers with Lynx

lynx -dump -head "http://www.example.com/"
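Once you have the headers, you can pipe them through grep to pull out a single field. Here is a minimal offline sketch; the header text below is invented for illustration, standing in for the real output of the command above:

```shell
# A sample response header, standing in for `lynx -dump -head` output
# (these header lines are made up for the example).
headers='HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: Apache'

# Extract just the Content-Type field:
printf '%s\n' "$headers" | grep -o "Content-Type:.*"
# → Content-Type: text/html; charset=UTF-8
```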

The GNU/Linux command line gives you a lot of small tools that can be connected with each other by piping the output of one tool into another tool.
For example, you might see a page with a lot of links on it that you want to examine more closely. You could open up a terminal and type something like the following:
$ lynx -dump "http://www.example.com" | grep -o "http:.*" >file.txt
That will give you a list of outgoing links on the web page at http://www.example.com, nicely printed to a file called file.txt in your current directory.

Here's how it works:


Lynx is a Web browser that only reads text. This makes it great for extracting text from web pages. The option -dump tells Lynx to grab the web page and display it in the terminal. That is followed by the URL you want to visit. So lynx -dump "http://www.example.com" is just saying, "Lynx, dump the output of http://www.example.com to the screen".

You can try the first part by itself to see what it does, replacing http://www.example.com with another URL of your choice. In the following example I've used the Google home page.
$ lynx -dump "http://www.google.com/"
You will see output something like the following (truncated here to part of the links section):

   6. https://mail.google.com/mail/?tab=wm
   7. http://www.google.com/intl/en/options/
   8. http://www.google.com/url?sa=p&pref=ig&pval=3&q=http://www.google.com/ig%3Fhl%3Den%26source%3Diglk&usg=AFQjCNFA18XPfgb7dKnXfKz7x7g1GDH1tg
   9. http://www.google.com/history/optout?hl=en
  10. http://www.google.com/preferences?hl=en
  11. https://accounts.google.com/ServiceLogin?hl=en&continue=http://www.google.com/
  12. http://www.google.com/advanced_search?hl=en
  13. http://www.google.com/language_tools?hl=en
  14. http://www.google.com/intl/en/ads/
  15. http://www.google.com/services/
  16. https://plus.google.com/116899029375914044550
  17. http://www.google.com/intl/en/about.html

Extracting the Links from Lynx

Now we can look at the next part of the URL extraction process:
$ lynx -dump "http://www.example.com" | grep -o "http:.*" >file.txt
When you use a pipe (the | symbol), it tells the computer to take the output from the first tool and send it to the following tool. So we are taking the output of Lynx and sending it to grep.

Grep is a tool to search for text and display each line that contains a matching pattern. The option -o tells grep to only return the matching part of the line and not the entire line. We are searching for anything that matches "http:.*", which is a simple regular expression.
A regular expression is a pattern made up of symbols that tell the computer what to look for in order to make a match. We want to find anything that matches the pattern: http: [and anything that comes after it]. A period (.) in a regular expression symbolizes one character of any type. The asterisk (*) symbolizes zero or more of the preceding character. So "http:.*" means "match 'http:' and any number of characters that follow it". This will extract only the URLs from Lynx's output.
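You can see what grep -o does without touching the network at all, by feeding it a couple of made-up lines in the same shape as Lynx's reference list:

```shell
# Two sample lines imitating lynx -dump output (invented for illustration).
printf '%s\n' "   6. http://www.example.com/about" "a line with no link" \
    | grep -o "http:.*"
# → http://www.example.com/about

# Note that the pattern "http:.*" misses https:// links; an extended
# regular expression (-E) catches both schemes:
printf '%s\n' "   7. https://mail.example.com/" | grep -oE "https?:.*"
# → https://mail.example.com/
```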
We could stop there and just run it as this, which will send the output to the screen:
$ lynx -dump "http://www.example.com" | grep -o "http:.*"
But it would be nice to save the output for later. To save the output to a file, just add the > symbol. In this case the output is being directed to a file named file.txt as shown below.
$ lynx -dump "http://www.example.com" | grep -o "http:.*" >file.txt

Other Options

Here is an example of some other options that you can add. The command sort sorts the results, and uniq removes any duplicate entries.
$ lynx -dump "http://www.example.com" | grep -o "http:.*" | sort | uniq >file.txt
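Because uniq only collapses adjacent duplicate lines, the sort step has to come first. A quick offline sketch with made-up URLs:

```shell
# Duplicates that are not adjacent would survive uniq alone, so sort first:
printf '%s\n' "http://a.example/" "http://b.example/" "http://a.example/" \
    | sort | uniq
# → http://a.example/
# → http://b.example/

# GNU and BSD sort also offer -u as a shorthand for sort | uniq:
printf '%s\n' "http://a.example/" "http://b.example/" "http://a.example/" \
    | sort -u
```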


Thanks

Saturday, July 21, 2012

Configure a system to use two different networks


Question: 

How can we configure a system to use two different networks?

Requirements:




  • Red Hat Enterprise Linux (or another RHEL-style distribution; the ifcfg files below are Red Hat-specific)
  • A system with two Network Interface Cards (NICs)
  • Two different networks


Solution:




    • Edit the file  /etc/sysconfig/network and fill in the values of the variables:
    NETWORKING=yes
    HOSTNAME=myhost.example.com
    Note: If necessary, remove or comment out the variable GATEWAY. Each NIC will have its own gateway.
    • Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 and add the following variables with their corresponding values:
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no
    USERCTL=no
    IPADDR=<IP address>
    NETMASK=<netmask>
    GATEWAY=<gateway IP>
    PEERDNS=yes
    DNS1=<DNS server IP>
    DNS2=<DNS server IP>
    
    • Do the same for the file /etc/sysconfig/network-scripts/ifcfg-eth1:
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no
    USERCTL=no
    IPADDR=<IP address>
    NETMASK=<netmask>
    GATEWAY=<gateway IP>
    PEERDNS=yes
    DNS1=<DNS server IP>
    DNS2=<DNS server IP>
    
    The values between '<' and '>' are dependent on your network.
    The variables PEERDNS, DNS1, and DNS2 are optional. If you have the same DNS servers for both networks, you should put the nameservers' IPs in the file /etc/resolv.conf, remove the variables DNS1 and DNS2 from both ifcfg-eth? files, and set PEERDNS=no.
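    Putting it together, a filled-in ifcfg-eth0 might look like the fragment below. The addresses are example values only (192.0.2.0/24 is a reserved documentation address range); substitute the values for your own network.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- example values only
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
PEERDNS=yes
DNS1=192.0.2.53
DNS2=192.0.2.54
```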
    • Restart the network service:
    # service network restart

    Thanks!