<!-- keep this as a security measure:
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.LCGAdminGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.LCGAdminGroup
#uncomment this if you want the page only be viewable by the internal people
#* Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.LCGAdminGroup
-->
---+ Service Card for EMI-APEL

%TOC%

---++ Definition

APEL is the system that collects all the accounting statistics of Phoenix and pushes them to the EGI central accounting database.

The current deployment at Phoenix is as follows:

   * APEL Client (site service) running on =apel02= and publishing to the official APEL production server using an APEL SSM instance running locally on =apel02=
   * APEL parsers running on the SLURM CREAM CEs =cream[01-03]= and pushing data to the APEL client on =apel02=
   * =arc[01,02]= do *not* send data to =apel02= but publish directly to the official APEL production server, using =jura= (an ARC CE component) to parse the ARC accounting logs and an APEL SSM instance running locally on each CE to publish the accounting data to the APEL server

To be allowed to publish on the official APEL production servers, a formal request has to be submitted through a GGUS ticket.

---++ Operations

The APEL client binary (=apelclient=) is run by a cron job (managed by CFEngine) on =apel02= every day:

<verbatim>
[root@apel02:~]# cat /etc/cron.d/apelclient
# APEL publisher once per day
17 6 * * * root /usr/bin/apelclient
</verbatim>

and it can be launched manually for testing purposes:

<verbatim>
[root@apel02:~]# /usr/bin/apelclient
</verbatim>

Similarly for the APEL parser running on =cream[01-03]= (cron entry managed by CFEngine):

<verbatim>
[root@cream01:~]# cat /etc/cron.d/apelparser
# APEL parser once per day
# 7 15 * * * root /usr/bin/apelparser
</verbatim>

which can be run manually to test the parsing and the correct data transfer to the DB running on the APEL client (on =apel02=):

<verbatim>
[root@cream01:~]# /usr/bin/apelparser
</verbatim>

On the ARC CEs =jura= is triggered automatically by A-REX every hour: this time interval is _hard-coded_ and *cannot* be changed in the installed version of ARC CE (3.0.3).

---+++ Client Tools

---+++ Testing

Accounting data can be published on the APEL testing servers, but a formal request to be allowed to publish on them must be submitted using a GGUS ticket. This is true for both the APEL client and the ARC CEs publishing through =jura=.

---+++ Start/Stop Procedures

As seen in Operations, all APEL components are run by cron jobs, so simply disable those jobs to stop any accounting-related operation (parsing, publishing, aggregation and so on).
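For example, a minimal way to disable the daily =apelclient= run is to comment out its cron entry (note that CFEngine manages these files and may restore them at its next run, so the change may also need to be made persistent there):

<verbatim>
[root@apel02:~]# sed -i 's|^17 6|#17 6|' /etc/cron.d/apelclient
[root@apel02:~]# cat /etc/cron.d/apelclient
# APEL publisher once per day
#17 6 * * * root /usr/bin/apelclient
</verbatim>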
Since a MySQL DBMS is running on =apel02=, also make sure it is up and running and started at boot time:

<verbatim>
[root@apel02:~]# service mysqld status
mysqld (pid 5931) is running...
[root@apel02:~]# chkconfig --list mysqld
mysqld          0:off   1:off   2:off   3:on    4:on    5:on    6:off
</verbatim>

---+++ Checking logs

Each time =apelclient= (on =apel02=) and =apelparser= (on the CREAM CEs) run, they log to:

<verbatim>
[root@apel02:~]# less /var/log/apel/client.log
[root@cream01:~]# less /var/log/apelparser.log
</verbatim>

On the ARC CEs information about =jura= runs can be found in the following log:

<verbatim>
[root@arc02:~]# less /var/spool/nordugrid/jobstatus/job.logger.errors
</verbatim>

Since =jura= uses =ssmsend= to send accounting records to an APEL server (_testing_ or _production_), useful information about the sending phase can be found in:

<verbatim>
# less /var/spool/arc/ssm/ssmsend.log
</verbatim>

---++ Setup

This section describes how to install and configure the service on the several types of machines involved, according to their role.

---+++ Dependencies

The APEL Client does not have special dependencies and it runs its own MySQL DB locally on =apel02.lcg.cscs.ch=. The APEL parsers running on the CREAM CEs and =jura= on the ARC CEs do not have any special dependencies either.

---+++ Installation and Configuration

---++++ APEL Client

Assuming that CFEngine did its job configuring the right YUM repo, installing the machine's host certificates and the package =ca-policy-egi-core=, first of all install MySQL and change the root passwd:

<verbatim>
[root@apel02:~]# yum --enablerepo=epel install mysql-server
[root@apel02:~]# service mysqld start
[root@apel02:~]# chkconfig --level 345 mysqld on
[root@apel02:~]# /usr/bin/mysqladmin -u root password '[...]'
</verbatim>

then install the EMI-3 APEL packages:

<verbatim>
[root@apel02:~]# yum -y --enablerepo=epel install apel-ssm apel-lib apel-client
</verbatim>

After the installation we can go on with the configuration; first of all the accounting DB:

<verbatim>
[root@apel02:~]# mysql -p -e "create database apelclient"
[root@apel02:~]# mysql -p apelclient < /usr/share/apel/client.sql
</verbatim>

then allow =apel02= and the CREAM CEs to push data into it (setting an appropriate passwd for the =apel= user):

<verbatim>
[root@apel02:~]# mysql -u root -p apelclient
mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'localhost' IDENTIFIED BY '[...passwd here...]';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'cream01.lcg.cscs.ch' IDENTIFIED BY '[...passwd here...]';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'cream02.lcg.cscs.ch' IDENTIFIED BY '[...passwd here...]';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON apelclient.* TO 'apel'@'cream03.lcg.cscs.ch' IDENTIFIED BY '[...passwd here...]';
Query OK, 0 rows affected (0.00 sec)
</verbatim>
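At this point it may be worth verifying the grants by connecting from one of the CREAM CEs as the =apel= user; a quick sanity check could be something like the following, where the =SHOW TABLES= output should list the tables created from =client.sql=:

<verbatim>
[root@cream01:~]# mysql -u apel -p -h apel02.lcg.cscs.ch apelclient -e "SHOW TABLES;"
</verbatim>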
We can now configure the APEL client on =apel02=:

<verbatim>
[root@apel02:~]# vim /etc/apel/client.cfg
[db]
hostname = localhost
port = 3306
name = apelclient
username = apel
password = [...]            # passwd set by the GRANT instruction
[spec_updater]
enabled = true
# The GOCDB site name
site_name = CSCS-LCG2
ldap_host = lcg-bdii.cern.ch
ldap_port = 2170
[...]
[unloader]
enabled = false             # "false" during tests; "true" during production
[...]
</verbatim>

The APEL SSM sender has to be configured in the following way to publish on the official APEL production servers (upon request via a GGUS ticket):

<verbatim>
[broker]
bdii: ldap://lcg-bdii.cern.ch:2170
network: PROD
use_ssl: true
[...]
[messaging]
destination: /queue/global.accounting.cpu.central
</verbatim>

It is also possible to publish on the official APEL testing servers (upon request via a GGUS ticket) configuring the SSM sender in this way:

<verbatim>
[broker]
bdii: ldap://lcg-bdii.cern.ch:2170
network: TEST-NWOB
use_ssl: false
[...]
[messaging]
destination: /queue/global.accounting.cputest.CENTRAL
</verbatim>

---++++ SLURM LRMS

In Phoenix the LRMS runs on different machines than the CREAM CEs, specifically on =slurm[1,2]=, so these have to be configured (through CFEngine as usual) in order to produce and share the job accounting log files used by the APEL parsers running on the CREAM CEs: copy

<verbatim>
/usr/share/apel/slurm_acc.sh
</verbatim>

from a CREAM CE (do *not* install the package containing it, i.e. =apel-parsers=) to =slurm[1,2]=. Then configure =slurm= to run this script after each job:

<verbatim>
[root@slurm1:~]# vim /etc/slurm/slurm.conf
[...]
JobCompType=jobcomp/script
JobCompLoc=/usr/share/apel/slurm_acc.sh
[...]
</verbatim>

In this way =slurm= puts all the job accounting data in =/var/log/apel/=, and this directory is shared among =slurm[1,2]= and all the CREAM CEs via the CSCS production NAS.

---++++ CREAM CE

The APEL parsers are installed as a dependency of the CREAM CE packages, so no special action is required. Regarding the configuration file, make sure it has the right permissions, since it contains a passwd, and then configure the parser with the right location of the =slurm= accounting files:

<verbatim>
[root@cream01:~]# chmod 600 /etc/apel/parser.cfg
[root@cream01:~]# vim /etc/apel/parser.cfg
[db]
hostname = apel02.lcg.cscs.ch
port = 3306
name = apelclient
username = apel
password = [...]            # same passwd configured on apel02
[site_info]
site_name = CSCS-LCG2
lrms_server = slurm1.lcg.cscs.ch
[blah]
enabled = true
dir = /var/log/cream/accounting
filename_prefix = blahp.log
subdirs = false
[batch]
enabled = true
reparse = false
type = SLURM
parallel = false
dir = /var/log/apel
filename_prefix = slurm_acc
subdirs = false
[logging]
logfile = /var/log/apelparser.log
level = INFO
console = true
</verbatim>

---++++ ARC CE (Jura)

On the ARC CEs the collection and publishing of accounting data to the APEL servers is performed by =jura=. On ARC version 3.X and newer it is installed as part of the ARC software, so no special action is required. It is configured by specific options in the ARC CE configuration file =/etc/arc.conf=:

<verbatim>
jobreport="APEL:https://mq.cro-ngi.hr:6162"
jobreport_publisher="jura"
jobreport_options="archiving:/opt/apel_accounting_backup/arc01_prod/,topic:/queue/global.accounting.cpu.central,gocdb_name:CSCS-LCG2,benchmark_type:HEPSPEC,benchmark_value:10,use_ssl:true"
</verbatim>

%ICON{"warning"}% Please note that the =jobreport_options= *must* be comma-separated options *on the same line*, otherwise =jura= will not parse them correctly.
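A quick sanity check after editing =arc.conf= is to print the =jobreport= lines and verify that each option set really occupies a single line:

<verbatim>
[root@arc01:~]# grep -n '^jobreport' /etc/arc.conf
</verbatim>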
To make =jura= publish accounting reports in an APEL-compliant format, an APEL server has to be configured directly (this is different from the configuration of the APEL client used by the CREAM CEs: in that case a _broker_ is configured, and this broker provides the URL of an available APEL server each time it is queried).

In the previous example the first option configures the APEL-compliant format for the accounting records and the chosen APEL _production_ server. The second option specifies =jura= as the accounting agent, and the third one comprises several other options:

   * =archiving=: path where =jura= saves each record that was published successfully
   * =topic=: an APEL setting that must be set to the value shown in order to publish to the APEL _production_ infrastructure (a different _topic_ is specified when configuring for the _testing_ infrastructure)
   * =gocdb_name=: the site name as published on the GOC DB
   * =benchmark_type=: the same one published through the BDII
   * =benchmark_value=: idem
   * =use_ssl=: must be set to =true= to publish to APEL _production_ servers (and to =false= when publishing to APEL _testing_ servers)

The paths currently used on Phoenix to store the archived records of =arc[01,02].lcg.cscs.ch= on the NAS are:

<verbatim>
/opt/apel_accounting_backup/arc01_prod/
/opt/apel_accounting_backup/arc02_prod/
</verbatim>

After any modification to =arc.conf= restart A-REX as usual to make =jura= use the new settings:

<verbatim>
# /etc/init.d/a-rex restart
</verbatim>

Special attention has to be paid when some of the previous settings are modified. =jura= is triggered as soon as A-REX is restarted and then runs every hour: this time interval *cannot* be modified since it is hard-coded (at least in ARC version 3.X). Moreover, the configuration options are _embedded_ in the input files used by =jura=, and these files must be edited manually so that a settings change also affects the accounting data not yet sent to an APEL server. This should become clearer in the following description of the procedure to make an ARC CE publish to the APEL accounting infrastructure.

---+++++ How to make an ARC CE publish to the APEL infrastructure

First of all, let's see the procedure to test the publication of APEL-compliant accounting records by an ARC CE to an APEL server belonging to the _testing_ infrastructure, as described in one of the reference pages (NorduGrid wiki).

Open a GGUS ticket to the APEL team to notify your intention to make your ARC CE publish directly to their servers via =jura=, providing your migration plan. They will reply asking you to publish to their testing infrastructure first, before attempting to publish to the production one. The ARC CE should be registered in the GOCDB with the role _gLite-APEL_ (the same role used for an APEL client). Please consider that a change in the GOCDB can take a few hours to be propagated and become effective.

Once authorized, configure the following =jobreport= section in =arc.conf= with the options specific to the testing servers:

<verbatim>
jobreport="APEL:https://test-msg02.afroditi.hellasgrid.gr:6163"
jobreport_publisher="jura"
jobreport_options="archiving:/path/for/archiving/,topic:/queue/global.accounting.cputest.CENTRAL,gocdb_name:CSCS-LCG2,benchmark_type:HEPSPEC,benchmark_value:10,use_ssl:false"
</verbatim>

Of course the =archiving= path should be customized for each ARC CE: this path (created by hand and not by =jura=) is *extremely* important to keep a local copy of the published records, which would otherwise not be available to be re-published to the APEL _production_ servers as described below. Please note the first option starting with an _APEL:_ string: in this way =jura= is set to produce APEL-compliant accounting records.
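Since =jura= does not create the =archiving= directory itself, remember to create it by hand before restarting A-REX, for example:

<verbatim>
[root@arc01:~]# mkdir -p /path/for/archiving/
</verbatim>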
To produce accounting records =jura= makes use of the A-REX job logs located in:

<verbatim>
/var/spool/nordugrid/jobstatus/logs/
</verbatim>

These logs are deleted after =jura= has been able to use them to generate accounting records, so usually the directory is populated only by the log files produced by A-REX between two consecutive runs of =jura=. Since the accounting options are _embedded_ in these log files, it is important to change the value of those options in *all* the files present in this directory before the run of =jura= triggered by the restart of A-REX:

<verbatim>
# cd /var/spool/nordugrid/jobstatus/logs/
# sed -i -e 's/loggerurl=APEL:.*/loggerurl=APEL:https:\/\/test-msg02.afroditi.hellasgrid.gr:6163/g' -e 's!accounting_options=.*!accounting_options=archiving:/path/for/archiving/,topic:/queue/global.accounting.cputest.CENTRAL,gocdb_name:CSCS-LCG2,benchmark_type:HEPSPEC,benchmark_value:10,use_ssl:false!g' ./*
</verbatim>

At this point A-REX can be restarted:

<verbatim>
# /etc/init.d/a-rex restart
</verbatim>

From now on the log files in the aforementioned directory should contain the new values for the =jobreport= options. =jura= should run every hour using these files and generating APEL-compliant records, temporarily stored in a directory like:

<verbatim>
/var/spool/arc/ssm/test-msg02.afroditi.hellasgrid.gr/outgoing/00000000/
</verbatim>

This directory should normally be empty if everything goes well and =jura= is able to run =ssmsend= (the binary that publishes accounting records to an APEL server) without any problem, as can be checked in:

<verbatim>
/var/spool/arc/ssm/ssmsend.log
</verbatim>

If the publishing has been successful, the archived records should be found in the configured =archiving= directory, and they can be used to re-publish the same accounting data to the APEL _production_ servers once the publication to those servers has been performed correctly at least once.

After checking on [[http://goc-accounting.grid-support.ac.uk/apeltest2/jobs.html][this page]] (it can take some time before the values are updated) that the published numbers are consistent with the local accounting data, the publication to the APEL _production_ servers can be attempted upon agreement with the APEL team in the same GGUS ticket filed to start the migration. The =jobreport= options for the APEL _production_ servers in =arc.conf= are something like:

<verbatim>
jobreport="APEL:https://mq.cro-ngi.hr:6162"
jobreport_publisher="jura"
jobreport_options="archiving:/path/for/prod/archiving/,topic:/queue/global.accounting.cpu.central,gocdb_name:CSCS-LCG2,benchmark_type:HEPSPEC,benchmark_value:10,use_ssl:true"
</verbatim>

Here a new archiving path is specified (which must be created by hand as usual), just to avoid mixing testing and production records in this phase (after checking that everything is OK, the two sets can be merged and only the production directory used to store newly accounted records). Before restarting A-REX the _embedded_ settings in the =jobstatus/logs/= files have to be changed as well:

<verbatim>
# cd /var/spool/nordugrid/jobstatus/logs/
# sed -i -e 's/loggerurl=APEL:.*/loggerurl=APEL:https:\/\/msg.cro-ngi.hr:6163/g' -e 's!accounting_options=.*!accounting_options=archiving:/path/for/prod/archiving/,topic:/queue/global.accounting.cpu.central,gocdb_name:CSCS-LCG2,benchmark_type:HEPSPEC,benchmark_value:10,use_ssl:true!g' ./*
</verbatim>

and finally A-REX can be restarted:

<verbatim>
# /etc/init.d/a-rex restart
</verbatim>

=jura= should be triggered at this restart and then every hour.
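Before trusting the hourly runs, it may be worth double-checking that the =sed= above actually rewrote *all* the job log files; assuming every file carries an =accounting_options= line, something like the following should print a single unique value:

<verbatim>
# grep -h 'accounting_options' /var/spool/nordugrid/jobstatus/logs/* | sort -u
</verbatim>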
The usual =ssmsend.log= can be checked for issues:

<verbatim>
/var/spool/arc/ssm/ssmsend.log
</verbatim>

A new directory should have been created by =jura= to temporarily keep the APEL-compliant records ready to be sent to the production server:

<verbatim>
/var/spool/arc/ssm/msg.cro-ngi.hr/outgoing/00000000/
</verbatim>

This directory should be emptied by each run of =jura=/=ssmsend=. The _production_ archiving directory should be populated as well with the data related to the jobs run by the ARC CE after the new configuration has been made effective with the restart of A-REX.

Upon confirmation by the APEL team that the publishing has been successful, the previously archived records (those published to the _testing_ servers) can be re-published to the _production_ server configured above. Of course, notify the APEL team that you are attempting to re-publish archived records, specifying which time period you are considering. To do so, the archived records need to be aggregated and put in the directory used by =jura= to temporarily keep them before running =ssmsend=. To aggregate the archived records they should be put in the same file with the following structure (a possible way to build such a file is sketched at the end of this section):

<verbatim>
<?xml version="1.0"?>
<UsageRecords xmlns="http://eu-emi.eu/namespaces/2012/11/computerecord">
<UsageRecord [...]>
[...]
</UsageRecord>
<UsageRecord [...]>
[...]
</UsageRecord>
</UsageRecords>
</verbatim>

where each =UsageRecord= is extracted from a single archived file (each archived file contains only one accounting record). Please note that the actual aggregated file must be made up of only two lines: the first with the XML version declaration, and then the =UsageRecords= element with all the =UsageRecord= elements on the *same* line.

The aggregated file should be named with a time-stamp like =yyyymmddhhmmss= and put in the *new* =outgoing= directory created *after* =arc.conf= has been modified with the _production_ settings and A-REX restarted.

%ICON{"warning"}% After some trial and error it has been figured out that =jura= is not able to manage aggregated accounting record files larger than about 100MB, so it is advisable to split large aggregated files into single files of ~90MB, of course keeping the same internal XML structure shown above.

The aggregated file(s) have to be put in the =outgoing= directory shown above:

<verbatim>
/var/spool/arc/ssm/msg.cro-ngi.hr/outgoing/00000000/
</verbatim>

between two consecutive runs of =jura=, whose outcome can be checked in the usual log file:

<verbatim>
/var/spool/arc/ssm/ssmsend.log
[...]
2014-03-24 12:49:20,925 - ssm2 - INFO - Sending message: 00000000/20140228234759
2014-03-24 12:50:18,586 - ssm2 - INFO - Waiting for broker to accept message.
2014-03-24 12:50:18,588 - ssm2 - INFO - Broker received message: 00000000/20140228234759
# in this case 20140228234759 is the name of one of the aggregated records
# files created by hand
[...]
</verbatim>

If everything went well and the APEL team confirms that the re-publication has been successful, the accounting numbers can be checked on the official APEL web pages (see the following sections; it may take some time before those pages are updated) and the GGUS ticket eventually closed.

%ICON{"warning"}% Please note that the FQDN and port of the server used to publish to the APEL _production_ network can change from time to time. In this case a procedure similar to the one described to shift from testing to production should be followed (including the =sed= used to change the _embedded_ settings in the =jobstatus/logs/= files).
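As referenced above, this is a minimal sketch of how such an aggregated file could be built from the archived records. It assumes that each archived file contains exactly one =UsageRecord= element carrying attributes (as in the structure shown above), and it does *not* implement the ~90MB splitting, so check a few archived files first and adapt the paths:

<verbatim>
#!/bin/bash
# Illustrative sketch only: build one two-line aggregated file from the
# per-job archived records. ARCHIVE and OUT are placeholders to adapt.
ARCHIVE=/path/for/archiving
OUT=/tmp/$(date +%Y%m%d%H%M%S)          # time-stamp file name as required
printf '<?xml version="1.0"?>\n' > "$OUT"
{
  printf '<UsageRecords xmlns="http://eu-emi.eu/namespaces/2012/11/computerecord">'
  for f in "$ARCHIVE"/*; do
    # flatten each file, then keep only its <UsageRecord ...>...</UsageRecord>
    # payload; adjust the pattern if the archived files carry an enclosing
    # <UsageRecords> wrapper of their own
    tr -d '\n' < "$f" | grep -o '<UsageRecord .*</UsageRecord>'
  done | tr -d '\n'                     # all records must stay on one line
  printf '</UsageRecords>\n'
} >> "$OUT"
</verbatim>

The resulting file can then be moved into the =outgoing= directory between two runs of =jura= as described above.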
There is a list of available servers on the referenced NorduGrid wiki page, but check if and when it was last updated. The server name and port shown in the examples on this page reflect the ones currently used by Phoenix.

---+++ Upgrade

The upgrade from the EMI-2 to the EMI-3 version was done from scratch, since the two versions are not compatible. Upgrades to future versions of the EMI-3 middleware are to be tested when available. The upgrade of =jura= is performed automatically when upgrading the ARC CE middleware and _usually_ no change in the =arc.conf= settings regarding the =jobreport= options is necessary.

---+++ Backup

The =apel02= FS is backed up on a regular basis through CFEngine and stored on the CSCS-wide storage area =/store= (which provides a tape back-up as well). Regarding the ARC CEs, the directory on the NAS they use to archive the APEL-compliant records is backed up daily on the tape system at CSCS.

---++ Monitoring

---+++ Nagios

The only specific check implemented so far is the one related to =apelclient=: its exit code is checked and reported back to Nagios via NSCA. The =cron= entry to run =apelclient= actually runs a simple script implementing this check, instead of launching =apelclient= directly:

<verbatim>
/usr/bin/apelclient.cron
</verbatim>
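The exact content of this wrapper is site-specific; a hypothetical sketch of what it could look like follows (the Nagios host, the service name and the NSCA config path are made-up placeholders, not the actual Phoenix values):

<verbatim>
#!/bin/bash
# Hypothetical sketch: run apelclient and report its exit code to Nagios
# via NSCA; all names below are placeholders.
/usr/bin/apelclient
RC=$?
printf 'apel02\tAPEL_Publisher\t%s\tapelclient exited with code %s\n' "$RC" "$RC" \
    | /usr/sbin/send_nsca -H nagios.example.org -c /etc/nagios/send_nsca.cfg
exit $RC
</verbatim>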
---+++ Ganglia

No specific checks implemented.

---+++ APEL Monitoring/Testing Official Web-pages

There are a couple of useful official APEL web pages showing information about the accounting data published by WLCG sites: the [[http://goc-accounting.grid-support.ac.uk/apel/jobs2.html][first one]] shows the main accounting data (number of jobs, CPU time and wallclock time) and information about accounting records and DB updates. It is very useful to check if the site as a whole is publishing correctly every day. The [[http://goc-accounting.grid-support.ac.uk/apel/jobs2_withsubmithost.html][second one]] is similar to the first one but provides details about the _submitting hosts_, i.e. the end points corresponding to the CEs (both CREAM and ARC) configured to publish to the APEL servers (directly, like the ARC CEs, or through an APEL client, like the CREAM CEs). An entry related to a CREAM CE looks like:

<verbatim>
cream03.lcg.cscs.ch:8443/cream-slurm-atlas
</verbatim>

while an entry related to an ARC CE is something like:

<verbatim>
gsiftp://arc01.lcg.cscs.ch:2811/jobs
</verbatim>

While an APEL client or an ARC CE is configured to publish to the APEL _testing_ infrastructure, [[http://goc-accounting.grid-support.ac.uk/apeltest2/jobs.html][this page]] should be used instead to get similar information and to check that everything is working fine, before asking to be authorized to publish to the APEL _production_ servers.

Another official APEL web page to check on a regular basis is the one related to the [[http://goc-accounting.grid-support.ac.uk/rss/CSCS-LCG2_Sync.html][APEL Synchronisation Test]], where a comparison is shown between the number of records in the APEL client's local DB and the corresponding value in the official APEL DB. If there is a mismatch between the two values and a _Warning_ or _Error_ has been raised, a re-publication may be needed as described in one of the _Issues_ below.

---++ Manuals and References

   * [[https://twiki.cern.ch/twiki/bin/view/EMI/APELPublisherReferenceCard][Service Reference Card]]
   * [[https://twiki.cern.ch/twiki/bin/view/EMI/APELFAQ][FAQ]]
   * [[https://twiki.cern.ch/twiki/bin/view/EMI/APELClient][EMI APEL Client]]
   * [[https://twiki.cern.ch/twiki/bin/view/EMI/EMI3APELClient][EMI3 APEL docs]]
   * [[http://wiki.nordugrid.org/wiki/Accounting][NorduGrid wiki page about accounting]]

---++ Issues

---+++ DB migration from EMI-2 =apel= to EMI-3 =apel03=

Please refer to this [[https://ggus.eu/ws/ticket_info.php?ticket=97623][GGUS ticket]] for a historical view of the accounting issues faced during the migration to EMI-3.

---+++ Bugs in SLURM accounting logs parser

Please see this [[https://ggus.eu/ws/ticket_info.php?ticket=98409][GGUS ticket]]. Currently the following files have been patched on =cream[01-03]= according to the solution provided in the ticket:

<verbatim>
/usr/lib/python2.6/site-packages/apel/parsers/slurm.py
/usr/lib/python2.6/site-packages/apel/common/datetime_utils.py
</verbatim>

---+++ Re-publishing of APEL client accounting data in case of failing Sync Test

If the [[http://goc-accounting.grid-support.ac.uk/rss/CSCS-LCG2_Sync.html][APEL Synchronization Test]] fails showing an error, a re-publication of the local DB records may be triggered to attempt to fix the mismatch between the local and the officially published accounting data. To do this the =gap= mode has to be enabled on the APEL client:

<verbatim>
[root@apel02:~]# vim /etc/apel/client.cfg
[...]
[unloader]
interval = gap
gap_start = 2014-02-01
gap_end = 2014-02-28
[...]
[root@apel02:~]# /usr/bin/apelclient
</verbatim>

In the usual APEL client logs there should be some lines related to the newly (re)published records. After this, remember to set the APEL client configuration back to publishing only the latest records:

<verbatim>
[root@apel02:~]# vim /etc/apel/client.cfg
[...]
[unloader]
interval = latest
#gap_start = 2014-02-01
#gap_end = 2014-02-28
[...]
</verbatim>

After the re-publication has been triggered, a few days may be required for the Sync Test to be updated and show an _OK_ status.

%ICON{"warning"}% Please note that in the current Phoenix configuration only the accounting data related to the CREAM CEs are published through an APEL client, so the data related to the ARC CEs are not affected by this kind of re-publication.
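To cross-check the gap period against the local DB before and after the re-publication, the local records for that interval can be counted; this assumes the default schema loaded from =client.sql= (table and column names may differ between APEL versions):

<verbatim>
[root@apel02:~]# mysql -u apel -p apelclient \
    -e "SELECT COUNT(*) FROM JobRecords WHERE EndTime >= '2014-02-01' AND EndTime < '2014-03-01';"
</verbatim>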
| *ServiceCardForm* ||
| Service name | APEL |
| Machines this service is installed in | apel02 |
| Is Grid service | Yes |
| Depends on the following services | lrms |
| Expert | Gianni Ricciardi |
| CM | CfEngine |
| Provisioning | Razor |