Run-levels in Ubuntu 16.04 LTS

Switching between different run-levels (targets) in Ubuntu 16.04 LTS

Ubuntu 16.04 has moved from init to systemd, so the concept of run-levels is replaced by the term targets. The advantages of choosing systemd are discussed in the article The Story Behind ‘init’ and ‘systemd’: Why ‘init’ Needed to be Replaced with ‘systemd’ in Linux. The seven run-levels of init map to the targets as follows:

 

Run-levels    Targets
0             poweroff.target
1             rescue.target
2, 3, 4       multi-user.target
5             graphical.target
6             reboot.target

To change to the non-GUI run-level (multi-user target):
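With systemd this is done by isolating the corresponding target, for example:

    sudo systemctl isolate multi-user.target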

To set this run-level as the default every time the system restarts:
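For example, to make the multi-user target the default:

    sudo systemctl set-default multi-user.target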

To check the current run-level:
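One way is to list the currently active targets:

    systemctl list-units --type=target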

or
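using the legacy sysvinit runlevel command, which systemd still supports and which prints the previous and current run-level:

    runlevel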

Similarly, one can switch to any target with:
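For example (graphical.target here is just an illustration; substitute whichever target you need):

    sudo systemctl isolate graphical.target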

 

Manas Chowki Eco-camp

An overwhelming camping experience at Manas Chowki Eco-camp.

New Year’s Eve was approaching, which stirred a restlessness at the back of my mind. I was eagerly waiting, like a dog with a dangling tongue, for somebody to throw out a plan for the New Year celebration. Fortunately, one of my seniors proposed a camping trip to Manas Chowki Eco-camp, and without a second thought I embraced it. Feeling relieved!

Along with me and my senior, six other members agreed to join. We talked to an NGO and confirmed our plan with them for the 30th and 31st of December. I was very excited about my first camping experience.

It was a wonderful sunny day on the 30th. We packed our bags with the bare minimum of clothes to withstand the cold and left IIT Guwahati at 12 PM. Following instructions from the NGO, we waited at Joyguru (NH27), near the very popular Gobindha Dhaba. The NGO had arranged for a public transport bus (ASTC) to pick us up. We boarded the bus at 1:30 PM, and it took around two and a half hours to reach the NGO office. The people of the NGO welcomed us and introduced themselves. The camp site (Chowki Picnic Spot) was 5 km away from the NGO office. The route to it was one way, so they made us wait in their office garden until the convoy of returning picnickers had passed. They discussed their plan and arrangements with us. The head of the NGO (Mr. Satan Ramsiyari), along with another member, Mr. Durlav Choudhory, would stay with us to guide us. I was delighted to learn that the next day they would take us on a trek across the river and hills to the village of Khalasu in Bhutan (very excited to cross the border).

Route from IIT Guwahati to NGO office
Route from NGO office to camping location

Finally, at 7:30 PM we were taken to the camp-site in a Maruti van. Some other NGO members followed us in a mini-truck carrying supplies for the night and set up the camp-site. After fifteen minutes, we arrived. It was a dark and breezy night. We were surrounded by hills, between the mighty river Pagladia and its canal. Except for the splashing sound of the rivers, it was very quiet. No buzzing streets, no honking horns.

A bonfire was put up. We sat around it and enjoyed its warmth. They set up a place for cooking, fried local chicken and pork, and served it with some salads. Woah! That was delicious. We sang, danced and talked with the locals around the fire till 1 AM. Dinner was served, and in groups of two we headed to our tents, where the beds were ready with ample blankets.

Installation of cuda-sdk and driver for Ubuntu 16.04

Go to CUDA Downloads and download the required version of the CUDA driver deb package.

Verify that you have a GPU on the system and that it is being detected properly.
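For example, lspci should list the NVIDIA device:

    lspci | grep -i nvidia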

Verify that you are running the supported OS.
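For example:

    uname -m && cat /etc/*release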

Verify the gcc version (refer to the System Requirements).
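For example:

    gcc --version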

Verify that the system has the Linux kernel headers installed.
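For example, check the running kernel and install the matching headers:

    uname -r
    sudo apt-get install linux-headers-$(uname -r)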

Update the ld configuration so that the runtime loader can automatically find the CUDA libraries.
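Assuming the default install location /usr/local/cuda, something like the following works (the config file name is arbitrary):

    echo "/usr/local/cuda/lib64" | sudo tee /etc/ld.so.conf.d/cuda.conf
    sudo ldconfig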

Since the NVIDIA driver is about to be installed, you need to blacklist the nouveau driver so that it does not load when you reboot.
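One common way is to create a modprobe blacklist file and regenerate the initramfs:

    echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
    echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf
    sudo update-initramfs -u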

Reboot into text mode before running the deb package. This is required because the GPU should be free from any engagement at the time of installation.
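With systemd this means switching the default target to multi-user and rebooting:

    sudo systemctl set-default multi-user.target
    sudo reboot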

Install cuda-sdk and driver.
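The deb filename below is a placeholder; use the exact name of the package you downloaded:

    sudo dpkg -i cuda-repo-ubuntu1604_<version>_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda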

Verify the installation.
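For example, check that the driver sees the GPU and that the compiler is on the PATH (assuming the default install path /usr/local/cuda):

    export PATH=/usr/local/cuda/bin:$PATH
    nvidia-smi
    nvcc --version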

Now you can revert the settings to boot back into GUI mode.
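For example:

    sudo systemctl set-default graphical.target
    sudo reboot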

 

Connect to GitLab through an SSH tunnel

This tutorial is for a scenario in which we cannot access a GitLab server directly, but can reach it through another server.

Figure 1: LAN setup for SSH tunneling

First, create an account on the GitLab server. For that you have to tunnel to the GitLab server’s HTTP port. Open a terminal and type the following command:
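The names in angle brackets are placeholders for your own setup; 9000 is the local port the tunnel listens on, and 80 is assumed to be the GitLab server’s HTTP port:

    ssh -L 9000:<gitlab-server-ip>:80 <username>@<intermediate-server>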

Type the password for your username on the intermediate server. Now you should be able to access GitLab’s web interface by opening http://localhost:9000 in your browser. In the REGISTER tab, fill in the form to create an account.

Log in to your account. To push and pull from the GitLab server through SSH, we need to generate an SSH key on our system and then add the SSH public key to your GitLab account. The following command will generate the SSH keys; press the RETURN key to accept the default settings for key generation.
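For example (the e-mail address is just a label for the key):

    ssh-keygen -t rsa -C "your_email@example.com"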

Now, copy the generated public key. To copy your public key to the clipboard on a Linux system, use xclip:
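Assuming the key was generated at the default location:

    xclip -sel clip < ~/.ssh/id_rsa.pub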

Navigate to the ‘SSH Keys’ tab in your ‘Settings’ and paste the key into the Key field. Add a title, such as the name of your PC (any name that identifies your PC).

Screenshot of the SSH Keys tab

Create a repository in your GitLab account. After creating the repository you can clone it to your computer through SSH. For that you need to create an SSH tunnel to port 22 (SSH):
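The local port 2222 and the names in angle brackets are placeholders; keep the tunnel open in one terminal and clone through it in another:

    ssh -L 2222:<gitlab-server-ip>:22 <username>@<intermediate-server>
    git clone ssh://git@localhost:2222/<your-gitlab-username>/<repository>.git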

Now you can add your source code to the cloned repository. To test, add a README.md file inside the repository, commit, and push to the GitLab server.
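For example:

    cd <repository>
    echo "# My project" > README.md
    git add README.md
    git commit -m "Add README"
    git push origin master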

 

Using whoosh with web2py

This tutorial is about creating a search engine application using whoosh and web2py.

To create a search engine, we first need some documents. For this tutorial I have crawled some sample documents from the Reuters archive. The code below describes crawling and parsing the HTML documents to extract the desired content for indexing. It helps to analyze the structure of the HTML files for efficient extraction of the important information.

Let’s start with the first phase of a search engine,

Crawling:

First, we will create a Browser class which imitates a web browser (user agent) and requests the desired pages for crawling. For this we will use the urllib2 Python library. Our Browser class is configured to identify itself as Mozilla Firefox.

The methods of the browser class are described below:

The request sent by our crawler will thus be seen as a request from a Firefox browser. The Browser class constructor takes care of this: we set up a cookie jar and add a user-agent description to the header. This description is sent along with every HTTP request.
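A minimal sketch of the class, assuming a Firefox user-agent string of roughly this vintage (the exact string may differ from the original code):

    import cookielib
    import urllib2

    class Browser(object):
        """Imitates a Firefox browser (user agent) for crawling."""

        def __init__(self):
            # Keep cookies across requests and present a Firefox user agent
            self.cookie_jar = cookielib.CookieJar()
            self.opener = urllib2.build_opener(
                urllib2.HTTPCookieProcessor(self.cookie_jar))
            self.opener.addheaders = [('User-agent',
                'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) '
                'Gecko/20100101 Firefox/50.0')]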

Next we have the get_html function, which is called with a URL (to be crawled). It requests the URL as a browser and stores the response. Sometimes the server sends the page content in a compressed format, so it is helpful to check for this and decompress it; zlib is used for decompressing.

We handle some of the exceptions that commonly occur while crawling:

  • URL ERROR: The handlers raise this exception (or derived exceptions) when they run into a problem, such as an I/O error.
  • SOCKET TIMEOUT ERROR: Raised when a request times out.
  • HTTP ERROR: Useful for handling exotic HTTP errors, such as requests for authentication.

The get_html function either returns the HTML content or a dictionary containing error information.
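A sketch of get_html along these lines; it belongs inside the Browser class above, and the timeout value and the error-dictionary keys are my own choices:

    import socket
    import urllib2
    import zlib

    # Method of the Browser class above
    def get_html(self, url, timeout=10):
        try:
            response = self.opener.open(url, timeout=timeout)
            html = response.read()
            # Decompress if the server sent compressed content
            encoding = response.info().get('Content-Encoding')
            if encoding == 'gzip':
                html = zlib.decompress(html, 16 + zlib.MAX_WBITS)
            elif encoding == 'deflate':
                html = zlib.decompress(html, -zlib.MAX_WBITS)
            return html
        except urllib2.HTTPError as e:        # e.g. authentication required
            return {'error': 'HTTP Error', 'code': e.code, 'url': url}
        except urllib2.URLError as e:         # e.g. I/O or DNS problems
            return {'error': 'URL Error', 'reason': str(e.reason), 'url': url}
        except socket.timeout:
            return {'error': 'Socket Timeout', 'url': url}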

I am considering some URLs for crawling, which are listed in the links.txt file.

Next we have the crawl function, which creates a Browser instance. It reads the links.txt file line by line, crawls the URLs and stores them as HTML files. If an error occurs for a URL, it is logged to a log file (error.txt in our case).
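A sketch of crawl, assuming the pages are saved by line number into a pages/ directory (the file layout is my own choice):

    import os

    def crawl(links_file='links.txt', out_dir='pages', log_file='error.txt'):
        browser = Browser()
        if not os.path.exists(out_dir):
            os.makedirs(out_dir)
        with open(links_file) as links, open(log_file, 'a') as log:
            for i, url in enumerate(links):
                url = url.strip()
                if not url:
                    continue
                result = browser.get_html(url)
                if isinstance(result, dict):    # get_html returned error info
                    log.write('%s : %s\n' % (url, result))
                else:
                    with open(os.path.join(out_dir, '%d.html' % i), 'w') as out:
                        out.write(result)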

HTML Parsing:

The functions above take care of crawling the documents. Now comes the parsing part.

For that, let’s create a class StoryExtractor which is instantiated with a document and provides methods to parse the DOM structure of the HTML. We are going to use the parser library BeautifulSoup to parse the documents. A BeautifulSoup instance creates a parse tree for the HTML document, with the markup tags as nodes. Using BeautifulSoup, the desired content within the HTML tags can be found by searching for a tag or its attributes (class, id, etc.). So we need to do a quick analysis of the HTML document and make note of the important tags and attributes.

The methods of the StoryExtractor class are described below:

The constructor of the class takes the HTML content and creates a BeautifulSoup instance.
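A sketch of the constructor; the bs4 import path and the html.parser backend are my assumptions and may differ from the original code:

    from bs4 import BeautifulSoup

    class StoryExtractor(object):
        def __init__(self, html):
            # Build the parse tree once; every method works on this soup
            self.soup = BeautifulSoup(html, 'html.parser')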

While analyzing the HTML document structure, we found that the content of the articles lives inside paragraph tags. So the function get_story_content finds all the paragraph tags as a list; we then traverse the list and append the text to form the main content of the document. Note that we only cut out the main span where the content resides, by specifying the attribute and its value.
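A sketch of get_story_content for the class above; the span id used to narrow the search ('articleText') is a placeholder, so use whatever your own analysis of the markup shows:

    # Method of the StoryExtractor class above
    def get_story_content(self):
        # Restrict to the main content span first (placeholder id), then
        # join the text of every paragraph tag inside it
        container = self.soup.find('span', attrs={'id': 'articleText'}) or self.soup
        paragraphs = container.find_all('p')
        return ' '.join(p.get_text(' ', strip=True) for p in paragraphs)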

The other functions, get_title and get_url, are implemented similarly.
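Possible implementations of the two; the og:url meta property used for the URL is an assumption about the page markup:

    # Methods of the StoryExtractor class above
    def get_title(self):
        tag = self.soup.find('title')
        return tag.get_text(strip=True) if tag else ''

    def get_url(self):
        # Many article pages expose their canonical address in an og:url meta tag
        tag = self.soup.find('meta', attrs={'property': 'og:url'})
        return tag['content'] if tag and tag.has_attr('content') else ''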

Finally, the extract function loops through all the crawled stories and extracts the desired content using the StoryExtractor. The extracted title, content and URL are stored in a dictionary object, and the dictionary is serialized to a file using the Python pickle library (i.e. the dictionary is stored in a file).
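A sketch of extract, assuming the crawled pages sit in the pages/ directory used by crawl above and that the pickled dictionary goes to stories.pkl (both names are my own):

    import os
    import pickle

    def extract(pages_dir='pages', out_file='stories.pkl'):
        stories = {}
        for name in os.listdir(pages_dir):
            with open(os.path.join(pages_dir, name)) as f:
                extractor = StoryExtractor(f.read())
                stories[name] = {'title': extractor.get_title(),
                                 'content': extractor.get_story_content(),
                                 'url': extractor.get_url()}
        # Serialize the whole dictionary to disk
        with open(out_file, 'wb') as f:
            pickle.dump(stories, f)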

So, this was the first part of the tutorial. You can find the code for this tutorial here.

and the whole code for the demo search engine in my GitHub repository.

In the next part we will focus on indexing the documents using whoosh (a Python library for text indexing).