<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page to be viewable only by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup
-->
---+!! Node Type: %CALC{"$SUBSTITUTE(%TOPIC%,NodeType,)"}%

---++!! Firewall requirements
| *local port* | *open to* | *reason* |
| 3401/udp | 128.142.0.0/16 , 188.185.0.0/17 | CMS central Squid monitoring based on SNMP |
| 3128/tcp | 192.33.123.0/24 | T3 local Squid access |

----------------
%TOC{title="Table of contents"}%

---+ Regular Maintenance work

---++ Updating Frontier
<!-- #List any regular activities which do not run automatically and need an administrator's action. -->
[[https://t3nagios.psi.ch/check_mk/index.py?start_url=%2Fcheck_mk%2Fview.py%3Fview_name%3Dservice%26site%3D%26host%3Dt3frontier01%26service%3DCheck%2520new%2520Frontier%2520RPM][t3nagios checks whether there are new Frontier RPMs to be installed]].
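In practice the update described below boils down to three steps: stop the squid service, update the =frontier-squid= RPM from the =cern-frontier= repo, and restart. A minimal sketch as a dry-run wrapper (the wrapper and its =DRYRUN= guard are additions for safe rehearsal, not part of the original procedure):

```shell
#!/bin/sh
# Sketch only: rehearse the Frontier update steps from this page.
# DRYRUN=1 (the default) just prints each command; set DRYRUN=0 on
# t3frontier01, during a T3 downtime, to really run them.
set -e
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
run service frontier-squid stop
run yum --disablerepo='*' --enablerepo=cern-frontier update frontier-squid
run service frontier-squid start
```

Note that while the text says to stop =squid=, the init script shipped by the RPM is =frontier-squid= (see the Service startup/stop section), which is what the sketch uses.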
If so, update during a T3 downtime: first stop =squid=, then run:
<pre>
[root@t3frontier01 ~]# yum --disablerepo=* --enablerepo=cern-frontier update
Loaded plugins: downloadonly, priorities, security, versionlock
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package frontier-squid.x86_64 11:2.7.STABLE9-20.1 will be updated
---> Package frontier-squid.x86_64 11:2.7.STABLE9-21.1 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch      Version                Repository        Size
================================================================================
Updating:
 frontier-squid      x86_64    11:2.7.STABLE9-21.1    cern-frontier    835 k

Transaction Summary
================================================================================
Upgrade       1 Package(s)

Total download size: 835 k
Is this ok [y/N]:
</pre>

---+ Emergency Measures
<!-- #List any measures that must be taken in case of some major incident, e.g. whether a mailing #list must be contacted or whether other services need to be shut down, etc. -->
If =t3frontier01= goes down, the CMS jobs will fall back to the CERN Frontier Squids and the CVMFS clients will keep using their local caches, so there should be enough time to fix this VM.
---+ Services
<!-- #List all the important services, their installation, configuration and how to start and stop them -->

---++ =lsof -u squid -P=
<pre>
COMMAND   PID  USER  FD   TYPE  DEVICE              SIZE/OFF   NODE       NAME
squid   13103 squid  cwd   DIR  8,2                     4096     130564   /root
squid   13103 squid  rtd   DIR  8,2                     4096          2   /
squid   13103 squid  txt   REG  8,2                   811848     167040   /usr/sbin/squid (deleted)
squid   13103 squid  mem   REG  8,2                   156928     263334   /lib64/ld-2.12.so
squid   13103 squid  mem   REG  8,2                  1926680     263335   /lib64/libc-2.12.so
squid   13103 squid  mem   REG  8,2                   145896     263336   /lib64/libpthread-2.12.so
squid   13103 squid  mem   REG  8,2                   599384     263357   /lib64/libm-2.12.so
squid   13103 squid  mem   REG  8,3                   217016        189   /var/db/nscd/group
squid   13103 squid  mem   REG  8,3                   217016        190   /var/db/nscd/hosts
squid   13103 squid    0u  CHR  1,3                      0t0       3656   /dev/null
squid   13103 squid    1u  CHR  1,3                      0t0       3656   /dev/null
squid   13103 squid    2u  CHR  1,3                      0t0       3656   /dev/null
squid   13103 squid    3u  unix 0xffff8803381e96c0       0t0    3500924   socket
squid   13106 squid  cwd   DIR  8,32                    4096  268435584   /home/dbfrontier/squid/squid_cache
squid   13106 squid  rtd   DIR  8,2                     4096          2   /
squid   13106 squid  txt   REG  8,2                   811848     167040   /usr/sbin/squid (deleted)
squid   13106 squid  mem   REG  8,2                   156928     263334   /lib64/ld-2.12.so
squid   13106 squid  mem   REG  8,2                  1926680     263335   /lib64/libc-2.12.so
squid   13106 squid  mem   REG  8,2                   145896     263336   /lib64/libpthread-2.12.so
squid   13106 squid  mem   REG  8,2                   599384     263357   /lib64/libm-2.12.so
squid   13106 squid  mem   REG  8,3                   217016        191   /var/db/nscd/services
squid   13106 squid  mem   REG  8,3                   217016        189   /var/db/nscd/group
squid   13106 squid  mem   REG  8,3                   217016        190   /var/db/nscd/hosts
squid   13106 squid    0u  CHR  1,3                      0t0       3656   /dev/null
squid   13106 squid    1u  CHR  1,3                      0t0       3656   /dev/null
squid   13106 squid    2u  CHR  1,3                      0t0       3656   /dev/null
squid   13106 squid    3u  unix 0xffff880338909380       0t0    3500932   socket
squid   13106 squid    4u  REG  0,9                        0       3654   anon_inode
squid   13106 squid    5u  REG  8,7                       68    1310729   /home/dbfrontier/squid_logs/cache.log
squid   13106 squid    6u  IPv4 3500939                  0t0              %RED%UDP *:51795 http://linuxplayer.org/2012/02/why-squid-listen-on-high-udp-port-number%ENDCOLOR%
squid   13106 squid    7w  REG  8,7                 29636757    1310737   /home/dbfrontier/squid_logs/access.log
squid   13106 squid    8r  FIFO 0,8                      0t0    3500940   pipe
squid   13106 squid    9w  REG  8,32                26837640  271800258   /home/dbfrontier/squid/squid_cache/swap.state
squid   13106 squid   10u  IPv4 3500942                  0t0              %BLUE%TCP *:3128 (LISTEN)%ENDCOLOR%
squid   13106 squid   11w  FIFO 0,8                      0t0    3500941   pipe
squid   13106 squid   12u  IPv4 3500943                  0t0              %ORANGE%UDP *:3401 http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+14.+Monitoring+Squid/14.3+Using+SNMP/%ENDCOLOR%
squid   13106 squid   13u  IPv4 15800738                 0t0              TCP t3frontier01.psi.ch:3128->t3wn41.psi.ch:39457 (ESTABLISHED)
squid   13106 squid   14u  IPv4 15800794                 0t0              TCP t3frontier01.psi.ch:3128->t3wn35.psi.ch:49764 (ESTABLISHED)
squid   13106 squid   15u  IPv4 15800829                 0t0              TCP t3frontier01.psi.ch:3128->t3wn28.psi.ch:44577 (ESTABLISHED)
squid   13106 squid   16u  IPv4 15800873                 0t0              TCP t3frontier01.psi.ch:3128->t3wn13.psi.ch:41743 (ESTABLISHED)
squid   13106 squid   18u  IPv4 15800839                 0t0              TCP t3frontier01.psi.ch:37468->cvmfs02.racf.bnl.gov:80 (ESTABLISHED)
squid   13106 squid   19u  IPv4 15800932                 0t0              TCP t3frontier01.psi.ch:3128->t3ui12.psi.ch:59661 (ESTABLISHED)
squid   13106 squid   21u  IPv4 15800934                 0t0              TCP t3frontier01.psi.ch:54272->front15.cern.ch:80 (ESTABLISHED)
unlinkd 13107 squid  cwd   DIR  8,2                     4096     130564   /root
unlinkd 13107 squid  rtd   DIR  8,2                     4096          2   /
unlinkd 13107 squid  txt   REG  8,2                     4952     145185   /usr/libexec/squid/unlinkd (deleted)
unlinkd 13107 squid  mem   REG  8,2                   156928     263334   /lib64/ld-2.12.so
unlinkd 13107 squid  mem   REG  8,2                  1926680     263335   /lib64/libc-2.12.so
unlinkd 13107 squid    0r  FIFO 0,8                      0t0    3500941   pipe
unlinkd 13107 squid    1w  FIFO 0,8                      0t0    3500940   pipe
unlinkd 13107 squid    2u  CHR  1,3                      0t0       3656   /dev/null
</pre>

---+ Installation

---++ Squid Installation
Read the [[https://twiki.cern.ch/twiki/bin/view/CMS/SquidForCMS][CERN central wiki]].
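After installation it is worth confirming that squid actually listens on the two local ports from the firewall requirements table at the top of this page (3128/tcp and 3401/udp). A small sketch; the =netstat=-style sample below is made up and embedded only so the snippet is self-contained, on the real host pipe in live =netstat -tuln= output instead:

```shell
#!/bin/sh
# Check for the two squid ports from the firewall requirements table.
# "sample" mimics `netstat -tuln` output (invented data).
sample='tcp        0      0 0.0.0.0:3128        0.0.0.0:*      LISTEN
udp        0      0 0.0.0.0:3401        0.0.0.0:*'
echo "$sample" | awk '
    $1 == "tcp" && $4 ~ /:3128$/ { tcp = 1 }
    $1 == "udp" && $4 ~ /:3401$/ { udp = 1 }
    END {
        if (tcp && udp) print "OK: squid ports 3128/tcp and 3401/udp present"
        else            print "WARNING: expected squid port(s) missing"
    }'
# with the sample above this prints: OK: squid ports 3128/tcp and 3401/udp present
```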
Fabio uses these aliases (do the same); the Puppet recipes are in =puppetdirnodes=:
<pre>
alias kscustom64='cd /afs/psi.ch/software/linux/dist/scientific/64/custom'
alias ksdir='cd /afs/psi.ch/software/linux/kickstart/configs'
alias puppetdir='cd /afs/psi.ch/service/linux/puppet/var/puppet/environments/DerekDevelopment/'
alias puppetdirnodes='cd /afs/psi.ch/service/linux/puppet/var/puppet/environments/DerekDevelopment/manifests/nodes'
alias puppetdirredhat='cd /afs/psi.ch/service/linux/puppet/var/puppet/environments/DerekDevelopment/modules/Tier3/files/RedHat'
alias puppetdirsolaris='cd /afs/psi.ch/service/linux/puppet/var/puppet/environments/DerekDevelopment/modules/Tier3/files/Solaris/5.10'
alias yumdir6='cd /afs/psi.ch/software/linux/dist/scientific/6/scripts'
</pre>
Puppet recipes, ordered top down:
   1 =SL6_frontier.pp=
   1 =SL6.pp=
   1 =tier3-baseclasses.pp=

---++ Squid conf =/etc/squid/squid.conf=
   * %ORANGE%CERN%ENDCOLOR% monitoring connections by SNMP
   * %BLUE%T3%ENDCOLOR% file requests
   * %GREEN%local cache%ENDCOLOR%
%TWISTY%
<pre>
# grep -v \# /etc/squid/squid.conf | strings
acl NET_LOCAL src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 %BLUE%192.33.123.0/24%ENDCOLOR% 127.0.0.1/32
acl HOST_MONITOR src 127.0.0.1/32 %ORANGE%128.142.0.0/16 188.184.128.0/17 188.185.128.0/17 131.225.240.232/32%ENDCOLOR%
acl snmppublic snmp_community public
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow NET_LOCAL
http_access allow localhost
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin
cache_mem 500 MB
maximum_object_size_in_memory 128 KB
cache_dir ufs %GREEN%/home/dbfrontier/squid_cache%ENDCOLOR% 25000 16 256
maximum_object_size 1048576 KB
logformat awstats %>a %ui %un [%{%d/%b/%Y:%H:%M:%S}tl.%03tu %{%z}tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh %tr "%{X-Frontier-Id}>h %{cvmfs-info}>h" "%{Referer}>h" "%{User-Agent}>h"
access_log /var/log/squid/access.log awstats
logfile_daemon /usr/libexec/squid/logfile-daemon
cache_log /var/log/squid/cache.log
cache_store_log none
mime_table /etc/squid/mime.conf
pid_filename /var/run/squid/squid.pid
strip_query_terms off
unlinkd_program /usr/libexec/squid/unlinkd
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i /cgi-bin/ 0 0% 0
refresh_pattern . 0 20% 4320
negative_ttl 1 minute
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
collapsed_forwarding on
cache_mgr squid
cache_effective_user squid
cache_effective_group squid
umask 022
snmp_access allow snmppublic HOST_MONITOR
snmp_access deny all
icp_port 0
icon_directory /usr/share/squid/icons
error_directory /usr/share/squid/errors/English
ignore_ims_on_miss on
coredump_dir /home/dbfrontier/squid_cache
</pre>
%ENDTWISTY%

---++ Registering the local frontier to the central CMS operations
Read https://twiki.cern.ch/twiki/bin/view/CMS/SquidForCMS#Register_Your_Server

---++ Registering the local frontier to the central WLCG operations
Read https://twiki.cern.ch/twiki/bin/view/LCG/WLCGSquidRegistration

Outcome was http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgall/T3_CH_PSI_t3frontier.psi.ch/index.html

---++ Service startup/stop

---+++ =/etc/init.d/frontier-squid start=

---+++ =pgrep -fl squid=
Testing whether the service is running:
<pre>
3245 /usr/sbin/squid -DF
3248 (squid) -DF
</pre>

---++ Awstats statistics
The statistics are *not* strictly required by CMS or by CVMFS but, like most statistics, they are useful; to check them run locally =[root@t3frontier01 ~]# firefox http://localhost.localdomain/awstats/awstats.pl=.
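Independently of awstats, a rough cache hit ratio can be computed straight from the =awstats=-format =access.log= defined above, since the =%Ss:%Sh= field carries squid's =TCP_HIT= / =TCP_MISS= result codes. A sketch with two invented sample lines embedded so it runs anywhere; on =t3frontier01= feed it the real =/var/log/squid/access.log= instead:

```shell
#!/bin/sh
# Rough hit/miss ratio from an awstats-format squid access.log.
# The %Ss:%Sh field (e.g. TCP_MEM_HIT:NONE, TCP_MISS:DIRECT) is matched
# by regex; the two log lines below are invented sample data.
awk '
    /TCP_[A-Z_]*HIT/  { hit++ }
    /TCP_[A-Z_]*MISS/ { miss++ }
    END {
        total = hit + miss
        if (total) printf "hits=%d misses=%d ratio=%.0f%%\n", hit, miss, 100 * hit / total
    }
' <<'EOF'
192.33.123.41 - - [03/Jun/2015:14:28:03.123 +0200] "GET http://cmsfrontier.cern.ch:8000/FrontierProd/Frontier HTTP/1.0" 200 617 TCP_MISS:DIRECT 50 "-" "-" "-"
192.33.123.42 - - [03/Jun/2015:14:28:04.456 +0200] "GET http://cmsfrontier.cern.ch:8000/FrontierProd/Frontier HTTP/1.0" 200 617 TCP_MEM_HIT:NONE 1 "-" "-" "-"
EOF
# prints: hits=1 misses=1 ratio=50%
```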
---++ Squid Testing
Test to be moved into =t3nagios=. Read first https://twiki.cern.ch/twiki/bin/viewauth/CMS/SAMSquidByHand ; then in =nagios@t3wn41:/opt/nagios/test_squid= there is a =.sh= script that has to return =%GREEN%OK%ENDCOLOR%= :
<pre>
[root@t3wn41 ~]# /opt/nagios/test_squid/test_squid.py.sh
node: t3wn41.psi.ch
SiteLocalConfig: %RED%/cvmfs/cms.cern.ch/SITECONF/local/JobConfig/site-local-config.xml%ENDCOLOR%
Contents of site-local-config.xml are:
<site-local-config>
  <site name="T3_CH_PSI">
    <event-data>
      <catalog url="trivialcatalog_file:/cvmfs/cms.cern.ch/SITECONF/local/PhEDEx/storage.xml?protocol=dcap"/>
      <catalog url="trivialcatalog_file:/cvmfs/cms.cern.ch/SITECONF/local/PhEDEx/storage.xml?protocol=xrootd"/>
    </event-data>
    <source-config>
      <cache-hint value="application-only"/>
      <read-hint value="auto-detect"/>
      <statistics-destination name="cms-udpmon-collector.cern.ch:9331" />
    </source-config>
    <local-stage-out>
      <command value="srmv2"/>
      <catalog url="trivialcatalog_file:/cvmfs/cms.cern.ch/SITECONF/local/PhEDEx/storage.xml?protocol=srmv2"/>
      <se-name value="t3se01.psi.ch"/>
      <option value="-debug"/>
      <phedex-node value="T3_CH_PSI"/>
    </local-stage-out>
    <calib-data>
      <frontier-connect>
        <proxy url="http://t3frontier.psi.ch:3128"/>
        <backupproxy url="http://cmsbpfrontier.cern.ch:3128"/>
        <backupproxy url="http://cmsbproxy.fnal.gov:3128"/>
        <server url="http://cmsfrontier.cern.ch:8000/FrontierInt"/>
        <server url="http://cmsfrontier1.cern.ch:8000/FrontierInt"/>
        <server url="http://cmsfrontier2.cern.ch:8000/FrontierInt"/>
        <server url="http://cmsfrontier3.cern.ch:8000/FrontierInt"/>
      </frontier-connect>
    </calib-data>
  </site>
</site-local-config>
site: T3_CH_PSI
loadtag: None
script version: $Id: NodeTypeCmsFrontier.txt,v 1.24 2015/06/08 09:28:19 fabiom Exp $
Using Frontier URL: http://cmsfrontier.cern.ch:8080/FrontierProd/Frontier
Query: SELECT 1 FROM DUAL
Query started: 06/03/15 14:28:03 CEST
squid: http://t3frontier.psi.ch:3128
Frontier Request:
http://cmsfrontier.cern.ch:8080/FrontierProd/Frontier?type=frontier_request:1:DEFAULT&encoding=BLOB&p1=eNoLdvVxdQ5RMFRwC/L3VXAJdfQBACyLBKw=
Query ended: 06/03/15 14:28:03 CEST
Query time: 0.05 [seconds]

Query result:
<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE frontier SYSTEM "http://frontier.fnal.gov/frontier.dtd">
<frontier version="3.34" xmlversion="1.0">
  <transaction payloads="1">
    <payload type="frontier_request" version="1" encoding="BLOB">
      <data>BgAAAAExBgAAAAZOVU1CRVIHBgAAAAExBw==</data>
      <quality error="0" md5="2e47f41c56b898fb582b7ecf1e8686cc" records="1" full_size="25"/>
    </payload>
  </transaction>
</frontier>

Fields: 1 NUMBER
Records: 1

%GREEN%OK%ENDCOLOR%
/root
</pre>

---++ Remote Monitoring vs PSI
   * http://wlcg-squid-monitor.cern.ch/squid_monitors.txt ( look inside for the string PSI )
   * http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgall/T3_CH_PSI_t3frontier.psi.ch/index.html %N% <-- it means that =t3frontier01.psi.ch= has also been properly registered in GOCDB
   * http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgcms/psi/proxy-hit.html
   * http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgcms/psi/proxy-srvkbinout.html
   * http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgcms/psi/proxy-obj.html

---+ Backups
OS snapshots are taken nightly by the PSI VMware team (e.g. Peter Huesser); in addition, we have LinuxBackupsByLegato to recover single files.
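An aside on the =test_squid= transcript in the Squid Testing section: the =<data>= BLOB in the Frontier response is a binary, length-prefixed record, and its printable bytes already match the =Fields: 1 NUMBER= / =Records: 1= summary. A quick peek, assuming standard =base64= and =tr= are available:

```shell
#!/bin/sh
# Decode the BLOB payload from the test_squid transcript and keep only the
# printable alphanumeric bytes: the column name "1", its type "NUMBER",
# and the single record value "1" from SELECT 1 FROM DUAL.
echo 'BgAAAAExBgAAAAZOVU1CRVIHBgAAAAExBw==' | base64 -d | tr -cd 'A-Za-z0-9'
# prints: 1NUMBER1
```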
NodeTypeForm
| *Hostnames* | t3frontier01, DNS alias t3frontier = t3frontier01 |
| *Services* | CMS-Frontier and CVMFS Squid cache |
| *Hardware* | PSI DMZ VMWare cluster |
| *Install Profile* | t3frontier |
| *Guarantee/maintenance until* | n.a. |
Topic revision: r24 - 2015-06-08 - FabioMartinelli