PSI Yum repository for Tier3
Introduction
PSI runs a customized, on-site replicated version of the
Scientific Linux distribution; so far the Tier3 runs SL5.4 64bit, but PSI has already switched to SL5.7 as the default SL5 flavour to install. To avoid setting up yet another service ourselves, we decided to create our own Yum repository as a subdirectory of the general PSI one; that was enough to deploy our specific RPMs simply by using
yum
and a
yum pointer repo file
as described in this chapter.
M. Gasser commented on our preliminary request as follows:
Hi Fabio,
Goto /afs/psi.ch/software/linux/dist/scientific/57/
There, you create a directory such as tier3 (not in the psi
subdirectory). We use so-called 'all' subdirectories to put all versions of an RPM in.
From there we just take the latest ones and put corresponding symlinks into snapshot directories; the date of creation is the name of such a
snapshot. I distinguish between unstable, testing and stable; this is quite handy, because when I try new things, I can point unstable to another snapshot than testing or stable.
However, it's up to you. To populate your repo you can just copy things or you can use 'repomanage' (package yum-utils,
see /afs/psi.ch/software/linux/dist/scientific/57/scripts/create_new_snapshots.sh or the manpage for an example how to use it).
When your repo is populated, you have to run the command 'createrepo' (package createrepo),
for instance in the directory you have the latest RPMS which you wanna install. The subdirectory repodata is made, where
the metadata of your RPMS are kept. On the client you have to edit /etc/yum.conf accordingly, to address this repo, or you add a separate repo config
file to /etc/yum.repos.d/ (see the existing yum.conf as an example).
Important, each time you change something in your repo you have to run createrepo again, even if you copy the same RPM again to the same place,
otherwise you'll get an error on the client. On the client it's often a good idea to 'yum clean all' to clear the yum cache, before running
yum install or whatever. So you are sure the repo data on your client corresponds to the latest repo data in your repo.
>
> Hi Marc
>
> could you please create, or advise how to do it, a directory like:
> /afs/psi.ch/software/linux/dist/scientific/57/psi/Tier3
>
> Fabio
In the end I've created
/afs/psi.ch/software/linux/dist/scientific/57/Tier3/
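The "all" + dated-snapshot + symlink layout that Marc describes can be sketched in a scratch directory; this is only an illustration (the dummy package foo is made up, and the real tree lives under /afs/psi.ch/software/linux/dist/scientific/57/Tier3/):

```shell
# Scratch-directory sketch of the layout; real path is under /afs/psi.ch/...
top=$(mktemp -d)

# "all" holds every version of every RPM (empty files stand in for real RPMs)
mkdir "$top/all"
touch "$top/all/foo-1.0-1.x86_64.rpm" "$top/all/foo-1.1-1.x86_64.rpm"

# A dated snapshot directory contains symlinks to only the newest RPMs
mkdir "$top/20111219"
ln -s ../all/foo-1.1-1.x86_64.rpm "$top/20111219/"

# Clients never see the date: they follow a name like "testing"
ln -s 20111219 "$top/testing"
ls "$top/testing/"
```

In the real repository, repomanage selects the newest RPMs out of all/, and createrepo then generates the repodata/ metadata inside the snapshot directory.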
HTTP Security of the repository
To prevent access from the Internet to our Yum repository, we've installed this file:
[martinelli_f@t3ui03 Tier3]$ cat /afs/psi.ch/software/linux/dist/scientific/57/Tier3/htaccess-intranet.acl
Order deny,allow
deny from all
allow from 129.129 172.16.0.0/12 192.33.120 192.33.123 192.33.126
Satisfy Any
Yum pointer file
On each Linux server where you want to deploy our T3 RPMs, you need to create a file like:
[root@t3vmui01 ~]# cat /etc/yum.repos.d/Tier3.repo
[Tier3]
name=Tier3 repo
baseurl=http://linux.web.psi.ch/dist/scientific/57/Tier3/testing
enabled=1
gpgcheck=0
Pay attention to the
testing
component of the baseurl: it's explained later, and it might be a different string.
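One way to create this pointer file on a client is with a heredoc (run as root; replace "testing" with whatever soft link this host should follow):

```shell
# Create the Tier3 pointer repo file on the client (run as root).
cat > /etc/yum.repos.d/Tier3.repo <<'EOF'
[Tier3]
name=Tier3 repo
baseurl=http://linux.web.psi.ch/dist/scientific/57/Tier3/testing
enabled=1
gpgcheck=0
EOF
yum clean all   # clear the cache so yum re-reads the repo metadata
```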
How to add new RPMs
To add new RPM(s):
- run
klog
- copy the RPM(s) into
/afs/psi.ch/software/linux/dist/scientific/57/Tier3/all
-
cd /afs/psi.ch/software/linux/dist/scientific/57/scripts
- create a new snapshot repository by running:
./create_new_snapshots.sh Tier3
- update the
testing
soft link, or create a new soft link with some other name,
that points to this new snapshot.
[martinelli_f@t3ui03 scripts]$ ./create_new_snapshots.sh Tier3
### begin ./create_new_snapshots.sh ###
Sourcing configuration file /scripts/dist-config
Running ./create_new_snapshots.sh, this will take some time ...
ALL_DIRS is psi others update.i386 update.x86_64 epelp dagp nagios nonfree cluster fastbugs.i386 fastbugs.x86_64
The following directories will be processed:
### Tier3
Create snapshot for Tier3 ...
creating directory /afs/psi.ch/software/linux/dist/scientific/57/Tier3/20111219/ ...
------------------------------------------------------------------------
Using /afs/psi.ch/software/linux/dist/scientific/57/Tier3/all
to populate /afs/psi.ch/software/linux/dist/scientific/57/Tier3/20111219 ...
(all RPMS will be copied as soft links)
------------------------------------------------------------------------
Find newest rpms, please wait ... done.
Create softlinks for each ../all/TARGET.rpm in /afs/psi.ch/software/linux/dist/scientific/57/Tier3/20111219/:
ganglia-3.0.7-1.src.rpm
ganglia-devel-3.0.7-1.x86_64.rpm
ganglia-gmetad-3.0.7-1.x86_64.rpm
ganglia-gmond-3.0.7-1.x86_64.rpm
ganglia-web-3.0.7-1.noarch.rpm
quota-3.17-1.2.5.x86_64.rpm
sun-sge-bin-linux24-x64-6.2-5.x86_64.rpm
sun-sge-common-6.2-5.noarch.rpm
done.
Run createrepo in /afs/psi.ch/software/linux/dist/scientific/57/Tier3/20111219/ ...
8/8 - sun-sge-common-6.2-5.noarch.rpm
Saving Primary metadata
Saving file lists metadata
Saving other metadata
done.
### end ./create_new_snapshots.sh ###
[martinelli_f@t3ui03 scripts]$
As shown above, Marc's
create_new_snapshots.sh
script creates a new snapshot repository, in this case in the directory
20111219
; to use it you need to update the
testing
soft link:
[martinelli_f@t3ui02 Tier3]$ pwd
/afs/psi.ch/software/linux/dist/scientific/57/Tier3
[martinelli_f@t3ui02 Tier3]$ ll
total 10
drwxr-xr-x 3 martinelli_f cms 2048 Dec 14 11:19 20111214
drwxr-xr-x 3 martinelli_f cms 2048 Dec 14 11:54 20111214-1154
drwxr-xr-x 3 martinelli_f cms 2048 Dec 19 17:21 20111219
drwxr-xr-x 3 martinelli_f cms 2048 Dec 20 12:27 all
-rw-r--r-- 1 martinelli_f cms 503 Dec 14 12:08 htaccess-intranet.acl
lrwxr-xr-x 1 martinelli_f cms 3 Dec 14 12:01 testing -> 20111214
[martinelli_f@t3ui02 Tier3]$ rm -f testing && ln -s 20111219 testing
[martinelli_f@t3ui02 Tier3]$ ll
total 10
drwxr-xr-x 3 martinelli_f cms 2048 Dec 14 11:19 20111214
drwxr-xr-x 3 martinelli_f cms 2048 Dec 14 11:54 20111214-1154
drwxr-xr-x 3 martinelli_f cms 2048 Dec 19 17:21 20111219
drwxr-xr-x 3 martinelli_f cms 2048 Dec 20 12:27 all
-rw-r--r-- 1 martinelli_f cms 503 Dec 14 12:08 htaccess-intranet.acl
lrwxr-xr-x 1 martinelli_f cms 8 Dec 20 12:38 testing -> 20111219
Case by case, and server by server, you have to decide whether to create a new soft link or to recycle an existing one.
The common setup is two soft links, with
testing
pointing to a snapshot that is the same as, or newer than, the one
stable
points to; the development servers point to
testing
while the production servers point to
stable
.
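A minimal sketch of that two-symlink convention, again in a scratch directory (the snapshot names are taken from the listings above; the real path is /afs/psi.ch/software/linux/dist/scientific/57/Tier3):

```shell
# Two-symlink convention: production follows "stable", development "testing".
repo=$(mktemp -d)
mkdir "$repo/20111214" "$repo/20111219"
ln -s 20111214 "$repo/stable"
ln -s 20111214 "$repo/testing"

# Promote the new snapshot to testing only; -n replaces the symlink itself
# instead of creating a link inside the directory it points to.
ln -sfn 20111219 "$repo/testing"
readlink "$repo/testing"   # -> 20111219
readlink "$repo/stable"    # -> 20111214, production is untouched
```

ln -sfn achieves in one step what the rm -f testing && ln -s pair in the session above does.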
Installing an RPM from the Yum repository
If the Yum pointer file and the soft link on the
/afs/
filesystem are properly configured, you can simply run
yum
as in the following example (here we install a custom quota RPM that understands
ldaps
):
[root@t3vmui01 ~]# yum install quota
Loaded plugins: kernel-module
Excluding Packages from 54 update
Finished
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package quota.x86_64 1:3.17-1.2.5 set to be updated
--> Finished Dependency Resolution
Beginning Kernel Module Plugin
Finished Kernel Module Plugin
Dependencies Resolved
===========================================================================================================================================================================
Package Arch Version Repository Size
===========================================================================================================================================================================
Installing:
quota x86_64 1:3.17-1.2.5 Tier3 358 k
Transaction Summary
===========================================================================================================================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 358 k
Is this ok [y/N]: y
Downloading Packages:
quota-3.17-1.2.5.x86_64.rpm | 358 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : quota 1/1
Installed:
quota.x86_64 1:3.17-1.2.5
Complete!
[root@t3vmui01 ~]#
Consolidate the repository
If you feel confident about the stability of the T3 RPM(s), you can run the
createrepo
command directly in the
all
subdirectory, but it's usually better to follow the snapshot mechanism suggested by Marc.
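For instance (assuming the createrepo package is installed, as in Marc's mail):

```shell
# Generate/refresh the repodata/ metadata directly in all/ (only do this
# once the RPMs there are considered stable).
cd /afs/psi.ch/software/linux/dist/scientific/57/Tier3/all
createrepo .
```

Note that the clients' baseurl would then have to point at .../Tier3/all instead of a snapshot soft link such as testing.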
--
FabioMartinelli - 2011-12-19