Showing posts with label linux. Show all posts

Friday, 18 October 2013

Finding the CWD (current working directory) of a running process

Luckily, thanks to the UNIX philosophy of "everything is a file", it's rather trivial to find a process's current working directory. You just need to look at the symbolic "cwd" link under the process directory:

# ls -al /proc/[process number here]/cwd
lrwxrwxrwx 1 build build 0 Oct 18 12:29 /proc/2506/cwd -> /root/run
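If you just want the target path without the ls decoration, readlink resolves the symlink directly. A quick sanity check against your own shell, using its PID via $$:

```shell
# Resolve the cwd symlink of the current shell ($$ expands to its PID).
# This works for any process you have permission to inspect.
readlink /proc/$$/cwd
```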

Saturday, 11 August 2012

How to see which package a file belongs to

Previously I wrote a post about how to see which files were installed as part of a package. In this post, I'll talk about going the other way, i.e. how to tell which package a file belongs to.

On Debian/Ubuntu you can do so using the dpkg search function:

$ sudo dpkg -S /etc/init.d/whoopsie
whoopsie: /etc/init.d/whoopsie


On RedHat/CentOS you can use the rpm command:

# rpm -qf /usr/bin/bash
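If you work across both families of distro, the two commands are easy to hide behind a small wrapper. A sketch (the pkg_owner function name is my own, not a standard tool):

```shell
# Hypothetical helper: report which package owns a file, using whichever
# package manager this system actually has.
pkg_owner() {
    if command -v dpkg >/dev/null 2>&1; then
        dpkg -S "$1"            # Debian/Ubuntu
    elif command -v rpm >/dev/null 2>&1; then
        rpm -qf "$1"            # RedHat/CentOS
    else
        echo "no supported package manager found" >&2
        return 1
    fi
}
```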

Configuring boot services

One of the common tasks when setting up a server is to configure whether a service is set to start up on boot or not. This is handled differently on different versions of Linux.

To list all services and whether they're set to run on boot:

RHEL/CentOS/Fedora

chkconfig --list

Debian/Ubuntu

rcconf



NOTE: rcconf doesn't come installed by default; you can install it with 'apt-get install rcconf'

Enable a service to run on boot:

RHEL/CentOS/Fedora

chkconfig [service name] on

Debian/Ubuntu

update-rc.d [service name] enable

Disable a service from running on boot

RHEL/CentOS/Fedora

chkconfig [service name] off

Debian/Ubuntu

update-rc.d [service name] disable
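The same distro split can be hidden behind tiny wrappers; a sketch under the sysvinit-era tooling above (the function names are mine):

```shell
# Hypothetical wrappers around the distro-specific commands listed above.
enable_on_boot() {
    if command -v chkconfig >/dev/null 2>&1; then
        chkconfig "$1" on           # RHEL/CentOS/Fedora
    else
        update-rc.d "$1" enable     # Debian/Ubuntu
    fi
}

disable_on_boot() {
    if command -v chkconfig >/dev/null 2>&1; then
        chkconfig "$1" off
    else
        update-rc.d "$1" disable
    fi
}
```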


Tuesday, 3 July 2012

Linux Kernel Swappiness

The Linux kernel has a tendency to use free memory as file system cache. This generally improves performance and is considered a "good thing". However, the kernel will also occasionally take memory allocated to running processes and swap it to disk, in order to use that memory for file system cache. This can and does make processes slower, especially ones that have been sitting in memory for a while without being actively used.

Luckily, there is a way to tune this behaviour, via the kernel "swappiness" value. The value ranges from 0 to 100: zero roughly means that process memory will never get swapped out for the sake of disk caching, while 100 means that process memory is swapped out very aggressively in favour of disk caching. A more in-depth explanation of how the kernel manages swappiness can be found here.

By default, this value is set to 60, which is tuned more for server throughput than for desktop responsiveness. On my desktops I usually set the value to 10, which seems to be a good fit.

The way to set it on Ubuntu is:

$ sudo sysctl vm.swappiness=10

and to make the changes permanent, just add the following line to /etc/sysctl.conf:

vm.swappiness=10
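To check what your system is currently using (the /proc path below is the standard Linux interface backing the sysctl key):

```shell
# Read the current swappiness; the value is an integer from 0 to 100.
swappiness=$(cat /proc/sys/vm/swappiness)
echo "current swappiness: $swappiness"
```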

Wednesday, 18 April 2012

Internal LAN on LXC

When I wrote my previous post about setting up LXC, one of the things I found was that installing the lxc package went ahead and created an "lxcbr0" interface.

It turns out that this interface is actually an "internal network" which you can connect your VMs to if you want them talking to each other directly, as opposed to over a network which the host is also on.

To set up my VMs, I just added another interface and connected it to the "lxcbr0" bridge, by adding the following lines to the configuration:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 4a:49:43:49:79:ef


and then configuring the interface in the "interfaces" file:

auto eth1
iface eth1 inet static
    address 10.0.3.2
    netmask 255.255.255.0


Then I did the same to another VM and they were able to talk to each other.

Monday, 9 January 2012

pwgen - Generate random passwords on Linux

There's a useful package in Debian/Ubuntu called pwgen, which allows you to generate random, human-pronounceable (debatably) passwords.

It works simply by running the 'pwgen' binary:

$ pwgen
Teboo0sh Rahz3Jee aeWae1mn isheL9oo Ahbubo6o fie7ow7L eij3Re0i ieCheh2A
oSae0pah uGu1Co0k Pa0PhieZ riope6Ie IeC6aiYi zie4Yahx Yoh0quae yab2iCae
Ooqu2wei chel2ohG EeSh5jok hoxoZa7o He8gaale gao6EiSh Uo8loh1b Phie2gie
Ehei7ais yeicoo4Z Een1ohcu duZ9ook6 aQuu3wei YuW4gaen soh8ueCh Phohwai5
bi9bu4Li ieWah7ae Aip5Ohv0 lieM1aiG raeF6voe Fooduo9a pohqu3Da Ahn0iRio
Uwaech6U ne8Quu9b AhV3oNee zieG1thi Shai1Chu Zae0pie1 aet1geFe Ko8wi4go

It also comes with some useful options:
  • -y includes at least one special character in each password
  • -N allows you to specify the number of passwords generated (by default the entire terminal is filled up with passwords)
  • -H allows you to generate repeatable passwords by using a file and a piece of text to seed the random number generator

You can install pwgen on Debian/Ubuntu using:

apt-get install pwgen

For a full list of options have a look at 'man pwgen'.

Sunday, 18 April 2010

Linux Hypervisor in 2.6.32

Just came across this article on Slashdot, detailing a new feature of the 2.6.32 kernel, dubbed 'Kernel Shared Memory' (KSM).

The technology allows you to de-duplicate memory regions across virtual machines running on the hypervisor. What does this mean? It means that you can have a physical server with 10GB of RAM and run 30 VMs on it, each with a full 1GB of RAM. In the release notes, there's even a reference to 52 Windows VMs running on a single 16GB server, with 1GB of memory each!

This of course depends a lot on what kinds of OSes you have running on the server (ideally you want the same OS on all of the VMs, to maximise the duplicate memory) and also on how similar the memory profiles of those VMs are. But my guess is that in most operating systems, there is a large amount of memory which is common, loaded at boot and then never written to.

From what I can tell, right now only the KVM hypervisor supports KSM, but seeing how general the technology is (it can be used by any process to de-duplicate memory pages), there's no reason why Xen couldn't easily make use of it.
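On kernels built with KSM support, the feature is exposed through sysfs; a quick way to check (the /sys/kernel/mm/ksm path is the standard interface, and will be absent on kernels without CONFIG_KSM):

```shell
# Check whether KSM is available and enabled on this kernel.
if [ -e /sys/kernel/mm/ksm/run ]; then
    ksm_state=$(cat /sys/kernel/mm/ksm/run)   # 1 = merging enabled, 0 = off
else
    ksm_state="unavailable"
fi
echo "KSM: $ksm_state"
```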

Wednesday, 10 March 2010

Connecting to SQL Server from Perl on Linux

One of the tasks I recently had to do at the office was to get a program written in Perl and running on Linux to connect to and work with Microsoft's SQL Server. I thought that I would document the process here on my blog, in the off chance that it might help someone, and also so that I have a reference I can go back to in the future.

So, I went through the process of installing the latest stable FreeTDS driver. If you get the tarball and install using the 'configure, make, make install' method the files are installed under the '/usr/local' directory.

To create a connection to your SQL Server you have to modify the '/usr/local/etc/freetds.conf' file and add the details of your server. For example, here are the details for a SQL Server 2008 instance:

# SQL Server 2008
[sql2008]
host = 231.321.452.871
port = 1433
tds version = 8.0

In order to test the connection you can use the 'tsql' command found under '/usr/local/bin', passing it the server, username and password:

tsql -S sql2008 -U [username] -P [password]

Note that if the connection is successful, you should get a very simple prompt (something like '1>') from which you can exit using the 'quit' or 'exit' commands.

So, now that we know the FreeTDS driver is working, we can set about getting access to this interface programmatically, i.e. from our program.

For the next step we're going to install the DBD::Sybase module from CPAN and configure it to work with FreeTDS. When I first tried this I attempted to run the following command:

sudo cpan DBD::Sybase

This however, resulted in the following error:

Please set SYBASE in CONFIG, or set the $SYBASE environment variable at Makefile.PL line 103, <in> line 45.

This was a relatively simple fix which involved going into the '/root/.cpan/build/DBD-Sybase-1.09/CONFIG' file and changing the line:

SYBASE=$ENV{SYBASE}||'/opt/sybase'

To:

SYBASE=/usr/local

After making the change, run 'perl Makefile.PL'. Leave all of the values at their defaults, and when it comes to the part where it asks for the server, database, username and password in order to test the install, enter the credentials for your server.

The next part is to simply run the 'make' command, which will compile the module. Trying this gave the following errors:

dbdimp.c:786: error: ‘BLK_VERSION_150’ undeclared (first use in this function)
dbdimp.c:786: error: (Each undeclared identifier is reported only once
dbdimp.c:786: error: for each function it appears in.)
dbdimp.c:790: error: ‘BLK_VERSION_125’ undeclared (first use in this function)
dbdimp.c:794: error: ‘BLK_VERSION_120’ undeclared (first use in this function)


In order to fix these errors, simply add the following lines after any #include statements in the 'dbdimp.c' file:

#define BLK_VERSION_150 BLK_VERSION_100
#define BLK_VERSION_125 BLK_VERSION_100
#define BLK_VERSION_120 BLK_VERSION_100

Run 'make' again, and this time it should build without any errors (although not without warnings). The next step is to run 'make test' to test the compiled code. You should note that I've never gotten the code to pass all of the tests; my theory is that some of the tests are specific to a Sybase database and fail against SQL Server.

Once 'make test' finishes, run 'make install', which simply copies the compiled files across to the correct place in the filesystem. Now, to test that what we've just done works, take the following code, paste it into a file, update it for your database and see if you get the correct return values:

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $dsn = 'DBI:Sybase:server=sql2008';

my $dbh = DBI->connect($dsn, '[username]', '[password]');
die "Unable to connect to server $DBI::errstr" unless $dbh;

$dbh->do('USE [database]');

my $query = 'SELECT * FROM [table]';
my $sth = $dbh->prepare($query) or die "prepare failed\n";
$sth->execute() or die "unable to execute query $query error $DBI::errstr";

my $rows = $sth->rows;
print "$rows rows returned by query\n";

while ( my @first = $sth->fetchrow_array ) {
    foreach my $field (@first) {
        print "$field, ";
    }
    print "\n";
}

That's it! Hopefully you're able to access SQL Server through Perl running on Linux now. For more information please have a look at the links below:
http://www.perlmonks.org/?node_id=392385
http://www.n3ncy.net/UNIX/FreeBSD/FreeTDS.htm
http://lists.ibiblio.org/pipermail/freetds/2006q3/020583.html

Thursday, 11 June 2009

Setting up a VPS - Part 2 - Postfix Virtual Domain/Users

The next step in setting up the VPS was installing and configuring the mail server. For this job, I've gone with the current king of MTAs - Postfix. The basic approach I've taken is to start simple and then add functionality bit by bit. In order to do this I've basically followed the guide found here. In the end I've ended up with support for virtual domains (separate domains) and virtual users (non-UNIX users) with a flat file backend. I don't have much to add to this tutorial, except to point out that where the setup says 'virtual_uid_maps = static:5000', this means that the process which delivers the message (i.e. writes to disk and creates any files/folders necessary) will run as this user. So there's no point in setting it to 5000 unless there is a user with that id which has write access to the virtual domain folder.

I've also had to add some directives to prevent the mail server from being flooded with spam: directives which check that the connecting server has a FQDN, as well as checking that its IP address isn't on any blacklists, i.e.

# Wait until the RCPT TO command before evaluating restrictions
smtpd_delay_reject = yes

# Basics Restrictions
smtpd_helo_required = yes
strict_rfc821_envelopes = yes

# Requirements for the connecting server
smtpd_client_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_rbl_client bl.spamcop.net,
reject_rbl_client dnsbl.njabl.org,
reject_rbl_client cbl.abuseat.org,
reject_rbl_client sbl-xbl.spamhaus.org,
reject_rbl_client list.dsbl.org,
permit

# Requirements for the HELO statement
smtpd_helo_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_non_fqdn_hostname,
reject_invalid_hostname,
permit

# Requirements for the sender address
smtpd_sender_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_non_fqdn_sender,
reject_unknown_sender_domain,
permit

# Requirement for the recipient address
smtpd_recipient_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_non_fqdn_recipient,
reject_unknown_recipient_domain,
reject_unauth_destination,
permit

These directives originally came from the email section of an article on howtoforge.com about setting up Mandriva Directory Server.

There's still a lot of work to go in setting up this email server; I haven't even got to setting up Dovecot and SASL. Then I want to set up Amavis and combine it with ClamAV and SpamAssassin (with Bayesian filtering and feedback). I also need to set up DKIM, both for signing mail coming from the server and for checking incoming DKIM-signed messages, and of course, as always, there's a need for a decent web front end to enable you to check your mail. I've been hearing good things about Google Apps, but I don't know anyone that's set it up on their own servers. I wonder whether that's even possible, or whether you have to use Google's mail servers?

So many technologies, so little time... and this is only setting up the email :)

Monday, 1 June 2009

Setting up a VPS - Part 1 - Hosting, SSH Security and ntp

Got a VPS from an outfit here in NZ called HostingDirect. Opted for Ubuntu 64-bit edition with the Small VPS package (128MB RAM, 10GB disk, 1 IP address). Also got domain registration (cheapest in NZ) and hosting with them which comes with free website hosting, which is nice.

The configurable options in the VPS setup allowed you to select LAMP setup for $150, Email server (SMTP, POP3, IMAP) for $60 and Security Tools for $45. I thought these prices were a bit steep, especially since the Small VPS package only cost $25/month after GST. But then I reminded myself what I charge for setting up such systems and it made sense. I didn't opt for these services, preferring to set them up myself.

So the VPS was provisioned in the afternoon on the 28th, but I didn't have time to start configuring it until that night when I came home. By the time I started having a look at it, there were already signs of brute force attacks on the ssh server. So the first thing I did was to create a new non-root user and add him to the 'admin' group, which was already set up in the sudoers file (mimicking the typical Ubuntu setup). From here I disabled the root ssh login and changed the ssh port to 222. Later I changed the ssh port back to the standard 22 and installed a great piece of software I found called 'fail2ban', which bans login attempts for a period of time based on the number of unsuccessful attempts.
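For reference, the sshd changes described above amount to something like the following in '/etc/ssh/sshd_config' (the directive names are standard OpenSSH ones; the port value is from the text):

```
# /etc/ssh/sshd_config
PermitRootLogin no
Port 222
```

Remember to restart sshd after editing for the changes to take effect.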

Before sorting out the ssh server and fail2ban, I did the obligatory 'apt-get update' followed by an 'apt-get upgrade', which all ran fine. I also checked the version of Ubuntu and the kernel, with the following results:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 8.04.2
Release: 8.04
Codename: hardy


$ uname -a
Linux example.org 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 GNU/Linux

So I ended up with Ubuntu 8.04 LTS 64-bit, which is exactly what I wanted. Shopping around for NZ VPS sellers, I found that a lot of them offered Ubuntu 7.10, which I found strange. I would think more people would prefer the long term support release; maybe it's something to do with stability issues of each distribution running on Xen.

The next thing to set up was the ntp daemon, which was quite straightforward and only involved adding the line 'server nz.pool.ntp.org' to the '/etc/ntp.conf' file and restarting the ntp daemon.
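So the whole change is a one-line addition to '/etc/ntp.conf' (server name as given above):

```
# /etc/ntp.conf
server nz.pool.ntp.org
```

followed by restarting the daemon (e.g. 'sudo /etc/init.d/ntp restart' on Ubuntu of that era).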

The VPS also came with access to XenShell, which is a way to administer your VPS through Xen (kind of like VMWare's server console). I've never worked with XenShell before so I'll have to look for a good tutorial to figure out how to make use of this tool.

That's all for today; it's late now, and tomorrow I'll start setting up Postfix and all the necessary extras, a task which is much better attempted with a clear head.

Thursday, 27 March 2008

My new laptop - Acer Aspire 4315 Review

I needed a simple, cheap laptop that I could take to university so that I could work on stuff there and not have to hope and pray that there'll be enough working computers for me when I get to university.


The Requirements
The requirements for my laptop weren't great. All it had to do was have WiFi (802.11 b/g) and be powerful enough to run a simple development web server (LAMP). The other, most important requirement was that it had to be cheap. After looking at the laptop section of PriceSpy I came upon the Acer 4315 going for $699NZD (plus a $99 cash back offer from Acer), which was the same price as the Asus EEE (after the cash back). The laptop came preloaded with Ubuntu 7.10, which was great for me, since I probably would have installed it anyway. So it was a tough decision: whether to go for extreme portability and a really cool little gadget, or for a rather standard (if a little old) laptop.

I made up my mind when Dick Smith had a computer sale and discounted their laptops by 10% lowering the before cashback price of the Acer to $630NZD. So one Saturday I went down to the store and brought it home.


You get what you paid for
For $630 you don't get a laptop bag or any other extras, just the laptop and the power cord. The salesman did offer an upgrade package including more RAM, a bag and something else (USB mouse?) for $99, which I declined. Other than the laptop and the power cord, there was also the warranty booklet from Acer, a pamphlet on how to put in the battery and turn on the laptop, and two instructional bits of paper.

Two bits of paper
The first piece of paper starts off by thanking you for purchasing the computer loaded with Ubuntu, followed by a blurb about Linux and Ubuntu in general that sounds like it came from the Ubuntu marketing department.


Near the bottom of the page, a section titled "Some system limitations" informs the user that the E key, wireless key, modem and microphone are all disabled "due to limitations of Linux". (The E key, I'm guessing, launches some kind of Acer software package that came with the laptop, and the wireless key enables/disables the wireless card.) It would have been more accurate of Acer to say that the E key is disabled because their developers only put out software designed to run on Windows. The wireless key does work, but in an interesting way: when connected, pushing the wireless button disables the wireless card, but in such a way that the network manager app doesn't know the device has been shut down, so the interface is still active and it tries to connect, but ends up failing. Pressing the wireless button again and telling the network manager to reconnect to the network seems to work for me. The microphone doesn't work at all. The modem I haven't tried, but I don't really need an analog modem with my laptop anyway.

The other side of the first bit of paper tells you how to set up an account once the computer is turned on. It basically consists of selecting your language, time zone, keyboard layout and your name, username and password. It also tells you how to create a regular user account once you log in.

The second bit of paper tells you how to connect to a wireless network on one side and on the other it gives a detailed list of how to install the automatix dvd and playback codecs.


About the Ubuntu installation
  • The laptop came with Ubuntu Gutsy 7.10 32-bit installed
  • The main partition is ext2, not the usual ext3, leading to a faster system, albeit a less safe one for your data (ext2 does not have journaling)
  • The swap partition is not encased in a linux extended partition but is directly mapped onto the hard drive and has a size of ~4GB
  • The computer name is set to ASUS
  • The Atheros wireless driver is enabled from first boot

The Cashback
In order to get the cash back offer, it turns out you have to go to Acer's website and register which model you bought, where and when. Registering was easy, but on the website the folks at Acer try to get you to abandon your cash back offer and instead use the money to buy their extended warranty, with the website claiming that the average laptop service costs $328. I just skipped this part and asked for the money. It turns out that in order to get your cashback you have to send the barcode from the box the laptop came in, along with your receipt, to an address in Australia within 30 days, or no deal. Not to mention that if you don't register on the website within 14 days of your purchase you can also forget it. But the best part is in their terms, where they say to allow up to 8 weeks after they receive the request for the cash to arrive. Seems like a bit of a double standard to me: giving you only 4 weeks to send your barcode and receipt in, but allowing themselves up to 8 weeks to send you your money. So it seems it'll be sometime in June when I get my money back, a nice little birthday present for me :)


Issues I have with the laptop
  • microphone doesn't work
  • suspend doesn't work
  • specialised buttons on side don't work (wireless button works but not fully)
  • screen flickers when plugging/unplugging the power cord
  • to disable the touchpad while writing you have to press the 'function' button and one of the F keys.
  • the latch at the front makes it sometimes tricky to open the laptop, requiring you to use your nail. But I do like having a latch.
All in all, I'm quite pleased with my first laptop. I wanted something basic for a bargain price and I found it. The lack of a microphone is a shame, but I can't remember the last time I used a microphone anyway, and the lack of suspend was to be expected. I have yet to install the webserver software, so it'll be interesting to see how well it runs. I'm also liking the rough plastic finish on the outside of the laptop; it gives it quite a solid look. The touchpad also feels quite good compared to some other laptops I've had a go with. Another good point is the fast boot time, which feels even faster than my desktop computer. This is most likely due to the decision to use the 32-bit version of Ubuntu and the ext2 file system, as opposed to the slower ext3.

Tuesday, 18 March 2008

So there's only web development...

So, after talking to a friend of mine, he came up with a statement that I believe sums up the New Zealand IT industry pretty well: "It's all web development". The industry here contains a few large companies which hire some CS/SE graduates, but the vast majority will find themselves, after university, working on some aspect of developing what are essentially web pages.

For someone who doesn't particularly enjoy web development, this is a rather bitter pill to swallow, learning that if you want to work in any other aspect of IT, you should probably move to Australia/UK/America.

What's even more disappointing is that during my 5 and a half years at university, there was hardly any emphasis placed on web development. I think in total there were 2 courses that dealt with any kind of web development. This leaves a large gap in my education, as people who are hiring are looking for experience with CSS, Javascript and AJAX, technologies which the university does not teach at all, and PHP/ASP.NET and MySQL/MSSQL, which the university does a poor job of teaching. In fact, looking at the curriculum, the only thing the Computer Science degree seems to prepare you for is more Computer Science.

So anyway... It's no use bitching about the past now. If I'm doomed to become a web developer then I might as well become the best fucking web developer this side of the equator. But this is going to involve a lot of learning. The type of learning I dread and generally avoid. Learning by yourself, in your own spare time. Having to force yourself to read another chapter after coming home from work tired and worn out from your shitty job. The really hard kind of learning... fuck.

But I do in fact have a plan. And having a plan keeps me from having a total nervous breakdown, something which I've been really close to this last month.

My plan essentially involves learning everything about web technology, from the ground up, from setting up a LAMP server to AJAX. This is an ambitious goal for me, one which might take over a year or more to complete as I don't know how much time I will be able to commit to my "2nd education". But I do know roughly what it will involve.

Step 1: Wiring up my house. That is to say, put in ethernet cables connecting the bedrooms of the house, the living room and the garage. This will allow me to put my de facto webserver (an old HP Pentium 4 my girlfriend was going to throw out) into the garage and have it running full time. This computer is a mixture of web server and storage server (after adding a 320GB HDD) and will serve files to the internet and to the different devices around the house (in the future I could have a separate machine as a file server and a webserver). This should also give me experience in how to set up a network for a SOHO (Small Office Home Office).

Step 2: Domain Name. Opening an account with a dynamic dns provider and setting up a domain name. Probably going to use dyndns.com as it seems to be quite popular.

Step 3: Learning about webservers, file servers, ssh servers, nfs servers, ftp servers, proxies etc... Because I want my web/storage server to be universally accessible it needs to be able to serve files across a wide range of protocols. For each of the types of server I need to
  1. Install the software
  2. Configure the software i.e. get it to do what I want securely
Luckily I have found a resource that deals with these issues and a lot more. The resource being www.linuxhomenetworking.com. The website is well written, easy to read and up to date. It's also free.

[NOTE: I should mention now that I intend to use only OSS software as a part of this education, for reasons which I will probably write about later]

Step 4: Install software for the management of my server. This includes things like Webmin, MySQL Administrator, PHPMyAdmin and perhaps some software to configure Apache (is there a decent GUI frontend for Apache or does it come down to editing config files?)

Step 5: Installing existing CMSs and understanding how they work. Looking at software such as Joomla!, Drupal, Wordpress, Blogger (is the Blogger source code available?), Silverstripe (Go NZ!) etc., and looking at how they are made, paying particular attention to how they handle extensibility (add-ons, extensions) and theming. However, for the basic ideas on how to create a CMS I'll probably start with php-mysql-tutorial.com. It should be interesting to see how much difference there is in the designs of these CMSs, and the benefits/disadvantages of each approach.

Step 6: Learn Cascading Style Sheets. I have a real love/hate relationship with CSS. That is to say I f***ing hate CSS. As far as I'm concerned, CSS is a great idea (separation of content and presentation), implemented in a totally illogical, counter-intuitive, overly complex way. But to be fair, that's what I thought of a lot of programming languages I learned until I "got" them. So maybe sometime in the future I will really love CSS, but I wouldn't bet money on it. I'm probably gonna go with the tutorials from W3Schools or failing that get a book from the university library on CSS.

Step 7: Learn Javascript. Again, I feel like I'm 10 years behind the learning curve with this one, but thanks to university (and my own lack of interest in web development) this is another area which I have hardly any experience with. With this one I'm gonna follow the W3Schools tutorials and if needed get a book from the uni library.

Step 8: Learn XHTML. Another one from W3Schools. Basically need to learn along with Javascript to be able to understand AJAX in step 9.

Step 9: Learn AJAX. Start with W3Schools tutorial on AJAX, then can move on to other examples on the internet. There's so many on the web that the biggest problem learning is going to be information overload. A book might be useful as well.

So then the question arises as to what to do with all of this new web development knowledge and the server sitting in my garage. The thing to do I guess would be to make a website to showcase my talents which could be a point of reference for people seeking examples of my skills/knowledge.

Also the idea of creating a photo gallery CMS based on the work I did with LaPhotographie is an idea I've had for a while. Such a system would, of course be open source and perhaps some day take off and become more than a pet project.

And the possibility of turning the setup in my garage into a web hosting company is there as well.

So there you go, that's my plan. It helps me to keep busy and ignore the fact that I'll probably end up doing web development for a long, long time.

Friday, 29 February 2008

Nuff said

Ubuntu's new website http://brainstorm.ubuntu.com/. Best idea ever. Nuff said.

Friday, 10 August 2007

Don't feel like downloading another CD?

Trying out Linux doesn't have to mean downloading a 700MB CD image and then having to burn it onto a CD. Thanks to the university software mirrors you have at least three places where you can go to get CD images for free.

Option 1:
http://mirror.cs.auckland.ac.nz/ - The Department of Computer Science software mirror. Clicking on the 'iso' or 'linux' folders will give you access to a large variety of the most popular distributions, including some less known ones. The mirror is kept fairly up to date and if they don't have what you're looking for you can always email the CS mirror maintainers and ask them nicely to help you out. The mirror has a host of other software as well and it's all FREE!!

Option 2:
http://intraftp.ece.auckland.ac.nz/ - The Department of Electrical and Computer Engineering FTP server. Although compared to the Computer Science mirror the FTP server has a sparse collection of Linux distros, they do host Ubuntu and Debian software repositories, which have most of the software programs you are ever gonna need.

Option 3:
http://dist.ec.auckland.ac.nz/ - This is the server which all of the Linux machines in the labs get their updates from. Not much in the way of CD images here, just a Debian, Ubuntu and possibly RedHat repository.

Once you've chosen your distro, download it to /tmp under Linux, or C:\usertmp under Windows, and then burn it to disc (most of the CS lab computers have CD burners). If you try to save the CD image to your afs drive (H: drive) it will most likely tell you that it can't due to a lack of space; the default allocation for Science students is about 300MB. If you download to the temporary folders though, don't expect to be able to access your files the next time you log in. The files in those directories are deleted regularly.

Sunday, 5 August 2007

Keep your Ubuntu System up to date for free

Have you ever been at home just browsing the net, when all of a sudden that orange update icon pops up? You click on it only to discover a new version of Open Office and an associated 200MB download. You look at your internet usage only to find that you are only 250MB away from your monthly limit with two days to go. So reluctantly you click to download and install the package, and then realize you have to go find something to do for half an hour while the package downloads.
This how-to tells you how to avoid having to waste time and bandwidth by using the university's repository. It should be particularly useful for people with slow connection speeds or low monthly data allocations.

To do this we need to first write down the names of the packages that need updating, their version numbers and which section of the repository they are from (main, universe, multiverse).

1. Start up Synaptic and click on the section 'Installed (upgradable)' in the left hand listbox. In the right hand section of the window you should see the packages that need upgrading.

e.g. tcpdump from version 3.9.5-2 to 3.9.5-2ubuntu1

2. Right click on a package and select 'Properties'. In the 'Common' tab, there is a line which says 'Section: .....'
Packages from the main repository will just have the section name. Packages from the universe repository will have '(universe)' at the end of the line. Packages from the multiverse repository will have '(multiverse)' at the end of the line.

e.g. for our example package tcpdump it says 'Section: Networking' meaning that it is in the main repository.

We now have the info we need. Now we can go to uni and navigate to http://intraftp.ece.auckland.ac.nz/. This is the ftp server of the Department of Electrical and Computer Engineering and is only available at uni. The server has a lot of software available, so feel free to look around.

3. The part we are interested in is the ubuntu repository. So we are going to navigate to ubuntu --> pool --> [the repository of your package] --> [the first letter of your package] --> [the name of your package] -->

e.g. -> ubuntu --> pool --> main --> t --> tcpdump -->

4. Once there, simply find the files which match the version number of our upgrade and end in .deb

e.g. tcpdump_3.9.5-2ubuntu1_amd64.deb
tcpdump_3.9.5-2ubuntu1_i386.deb

5. Select the file which matches your processor architecture and save it to your USB drive.

6. Take it home and double click on the file. You will get a warning saying that the package is available through the internet and that you should install it from there. Ignore it and install the package.

If you have a laptop, or you feel like taking your computer to uni one day, the process is even easier. Simply add the following line to your sources.list file:

deb http://intraftp.ece.auckland.ac.nz/ubuntu/ feisty main restricted universe multiverse

and then run apt-get or synaptic.

This method is also good for upgrading or installing new software.