Over the years I have lost so much work and photographs. These days, I print most things that I want to keep. A good backup strategy is as important as a disaster recovery plan, and knowing where your next meal will come from.
A while back I found Internxt, a Spain-based cloud storage company. They just get better and better. And tonight I discovered that LuckyBackup on MX Linux seamlessly connects to my account at Internxt. Deep joy!
Using rsync and cron behind an easy-to-use graphical wotsit, it just works.
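Under the hood a LuckyBackup task is essentially an rsync job fired by cron. A hand-rolled equivalent would look roughly like this (a sketch only; the paths and the mounted Internxt folder name are placeholders, not what LuckyBackup actually generates):
$ rsync -av --delete ~/Documents/ ~/Internxt/Backups/Documents/
# crontab entry to run the same sync every night at 02:30
30 2 * * * rsync -a --delete /home/me/Documents/ /home/me/Internxt/Backups/Documents/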
| Annie | Lennox |
www.youtube.com/watch?v=jGjNDqv1yMc
Need more?
www.youtube.com/user/AnnieLennox
I think I've covered all of my bases here.
- Those of you who like music
- Those who like Annie Lennox
- Those who use Flickr
- Those who like Lord of the Rings
- Those who like awards shows - not sure if there's anybody who does. :)
This is a vertorama. You know the drill...
Blog entry ahead.
Ok...now for something completely different. In November I detailed my backup process:
www.flickr.com/photos/sumsion/3061288579/
Recent events with shetha's backup issues:
www.flickr.com/photos/shetha/3353476605
reminded me to take a look at my own, which I've updated below.
My Most Current Backup Process
As I explained in November, I want my data in at least three locations at all times. My backup process has changed just slightly from last year. Here's what I'm doing now:
- I installed an internal 500GB drive in my MacBook. This is plenty of room for now. My laptop is where I store all of my work and do all processing.
- I use rsync on my Mac to copy/update all of my Lightroom files from my Mac to a server that I have at work. This can be done on the internal network at work or externally from anywhere in the world. You don't need a "server" to do this, since you can simply rsync between almost any two (or more) computers (see the sketch after this list).
- I have a small external 750GB USB drive and I use Time Machine to make hourly copies of everything that's on my laptop. I can leave this at work or take it with me wherever I go.
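For the curious, the rsync step above boils down to a single command run over SSH; a rough sketch, with the host name and paths made up:
$ rsync -az --delete ~/Pictures/Lightroom/ me@work-server:/backup/lightroom/
The -z flag compresses data in transit, which helps when syncing from outside the work network.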
Is this the perfect solution? No. But I end up with three copies of everything, even if there are hours when the data may only exist in one location - on my laptop. There is no such thing as too many backups, and there is no such thing as a perfect backup solution. Shetha was storing her data on a RAID, which protects against drive failure but, as her case shows, isn't a backup by itself. I wish her the best in restoring her data.
That's all for now. If you have questions, you know where to find me.
I've added another Slug to my home network, but this time I set it up with Debian rather than Unslung.
It's incredible how such a small, low-cost network storage device can be so versatile. Thanks to the Linux/hacker community.
PS: On the far right is my PAP2, serving my VoIP.
Running:
Apache2 to host my own website.
OpenSSH for secure logins
FTP server
rTorrent + Screen
Firefly (formerly known as "mt-daapd")
rsync - to run automated backups (and yes, I use the Slug with Time Machine).
Webcam-Server
Amazon S3 WebDAV mounted and backing up files via rsync.
Future projects:
Get my UPS to shut down my Slugs gracefully.
Host a WordPress blog.
See a complete review of my Network Closet here: youtu.be/1MzRNGlDcLs
Upgrades since the last photo include a new NAS with a 10G SFP+ interface. The old Buffalo NAS is now used as an rsync backup destination. I also installed two 19U rails to push the bottom half of the rack out by 6cm. This allows enough depth for NAS installation in the bottom half and plenty of room for wire management in the top half. I think it's done for now...
There are 4 main subnets in my home network:
Main - Green cables connect all main subnet components. This includes a 24 port 1GbE switch and a 12 port 10GbE switch connected via a 10GbE SFP+ cable. This is the main network of my home, connecting all computers, printers, Wi-Fi APs, media players, and a Synology RS3614xs NAS with 9 WD 3TB SE WD3000F9YZ HDDs, which acts as the media server and file server for all computers in my home. This NAS and the computers in my study are on the 10GbE network.
Guest - Yellow cables connect guest connections to half of a 24 port 1GbE switch. Yellow cable also connects the guest VLAN to the main network access points. This subnet is isolated from the rest of the network.
Surveillance - Blue cables connect all video surveillance equipment to a 16 port 1GbE PoE switch. This includes wiring for 10 security cameras and a Synology RS814+ NAS containing 4 WD 4TB SE WD4000F9YZ HDDs. I currently have 8 Hikvision security cameras running: 5 DS-2CD2032-I 4mm bullet cameras and 3 DS-2CD2132F-I 2.8mm dome cameras.
MODnet - Orange cables connect 4 set top boxes to the WAN through a 5 port 1GbE Switch for China Telecom Movie on Demand Internet TV service.
Interweb - Red Cables are outside of my network, which includes connection to the modem and the community network.
Concerning the photo, it's another version using the 35mm Cron. Lit with two flood lights through umbrellas from front top and bottom, a reflector at the right side, and backside lighting with a 100W quartz halogen through an umbrella. The shot is an overlay of several HDR tonemap images over an exposure fusion from a 4-shot, 1EV-step bracket.
My audio samples are stored in a FireWire external drive for use with the MacBookPro. I am migrating everything over to the MacBookAir and Apple told me that there is no way to use FW drives natively. WTF?!
So now I am copying everything over to the new drive. There are many ways you can do that. You could use OSX's native copy and paste, but it's slow, spends ages counting files first, and has issues if you terminate the transfer partway through.
But rsync does all of this, and does it better. It's already built into OSX. Shell tools are better and more efficient. You can buy apps that do this kind of thing, but why buy crappy apps when you already have the best tool for the job?
Syntax:
$ rsync -av [source] [target]
or what I did in my shell:
$ rsync -av /Volumes/Ra/Audio/ /Volumes/LaCie/Audio/
where:
-a is the archive flag: it makes the copy recursive and preserves attributes such as modification times, permissions, ownership, and symlinks, so the copies come out looking exactly like the originals.
-v is the flag for verbose mode, which shows the files being copied as the process continues.
For the rest of what rsync can do, look no further than its built-in manual, of course:
$ man rsync
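One detail the man page spells out that trips people up: a trailing slash on the source means "copy the contents of this directory" rather than the directory itself. Both of these are valid; they just put the files in different places:
$ rsync -av /Volumes/Ra/Audio/ /Volumes/LaCie/Audio/   # contents of Audio land inside LaCie/Audio
$ rsync -av /Volumes/Ra/Audio /Volumes/LaCie/          # creates LaCie/Audio and copies into it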
I know, it's a no-brainer. But I continue to be surprised at how many people don't know what's readily available in OSX.
# SML Workflow
Taken with the iPad 3, processed in Snapseed.
/ SML.20130130.IP3
/ #SMLOpinions #CCBY #SMLMusic #SMLUniverse #SMLPhotography #SMLEDU
/ #Mac #Apple OSX opinions #rsync #shell #AbletonLive #MacBookPro #MacBookAir #WTF #copy #file #transfer #howto #shell #man #edu #apps #commandline #Firewire #nerds #technology
Happy Thanksgiving everyone. I'm thankful for all of the wonderful help and feedback I've received from my Flickr friends. In the spirit of being thankful, I figured that I'd give back a bit and share the knowledge I've gained about backups. This info is all about backups with a Mac. You'll need to adapt to your needs.
This is a lot of info, but hopefully useful. Oh...and the image above is another look from outside of Paris, ID at sunrise.
Background
I'm a bit crazy when it comes to backing things up, so this has extended to my new photo library. After taking thousands of shots in my first six weeks with my D90, my laptop drive was quickly filling up. I'm a tech geek at heart, so I set about finding the best backup method for my needs.
How Much Is Too Much?
When it comes to backups, there's no such thing as too much. So what is ENOUGH? In my opinion, enough is having three copies of everything in three separate locations. If your computer and backup drive are both at your house, and your house burns down, your data is gone.
My Computer & The Problem
Computer: MacBook 2.4 GHz Intel Core 2 Duo with 4GB DDR3 :: OSX 10.5.5
Problem: My laptop goes with me, so it's not always close to an external drive for backup. I needed a solution that would work for multiple locations.
Option #1: My Previous Backup Process
I'm including this here since it will probably work for most of you that use a Mac. If you're fortunate enough to have Leopard, I hope you're taking advantage of Time Machine. This nifty built-in application silently backs up all of your data, every hour, to an external drive.
The problem with Time Machine is that it was created to back up to one specific external drive. There is a solution to this issue, however. Here's how to use two or more separate external backup drives with Time Machine:
- Get yourself a copy of "Do Something When" (DSW) here. Install it.
- Download my modified Applescript here. I would give credit to the originator of this script, but I don't know who that is.
- Open the script in Script Editor and follow the directions given there. Basically you simply need to change the names of the backup drives you're using.
- Follow the directions given in the script for adding options to DSW.
That's about it for option 1. The point of this is that you have two external drives in two locations. When you connect either one to your laptop, your Time Machine will fire up and create a backup, without any intervention from you. This option makes sure that you have your data in at least three locations: your laptop, backup drive #1 at home, and backup drive #2 somewhere else. If something bad were to happen, you would lose very little information.
Option #2: My Current Backup Process
As I've explained, I want my data in at least three locations. It helps me sleep a bit better. Option #1, above, is perfect if you have plenty of hard drive space on your laptop. I knew that I'd be running out of space within the next few months. With my switch from Aperture to Lightroom 2, I figured now was as good of a time as any to revamp my backup process. Here's what I'm doing now:
- I moved my Lightroom catalog to an external drive, which is connected to my laptop. This hurts the speed of Lightroom, but I'm ok with that.
- I had a friend of mine set up a Linux server, accessible via the Internet, with rsync. This server currently has 640GB of drive space available. Plenty for now. : )
- I created a script on my Mac that runs rsync. (You can use a script, cron, or something else of your choosing.) All of my images are now copied from my external drive to the server.
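The script itself is nothing fancy; roughly this, with the server name, user, and paths as placeholders rather than my real setup:
#!/bin/bash
# Push the Lightroom files from the external drive to the Linux server over SSH.
rsync -az --delete /Volumes/Photos/Lightroom/ me@myserver:/backup/lightroom/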
At this point I have my images in two separate locations. (My external laptop drive and the Linux server.) There are many options for your third location, such as an online backup/storage service that supports rsync, another external drive attached to the server, or what I've chosen:
I have a Mac Mini at home with an attached Drobo. The Drobo is redundant by itself, which is a great thing. My Drobo has just over a terabyte of space on it, which is perfect for backups. So here's how I got my data to the third location:
- I use rsync on the Mac Mini at home to sync my data from the online server to the Mini.
Conclusion...finally
All done. A nice thing about rsync is that it works in either direction. In other words, I could install Lightroom on my Mac Mini (at home) and work on my images there OR on my laptop. Changes made in either location would be synced to the online server and on to the other computer, provided I'm not updating images in both places at the same time.
Whew. That was a lot of info.
Questions? Please ask here in case others are wondering the same thing. If you are aware of a good, inexpensive online backup service that supports rsync, let me know.
How to resume a large SCP file transfer on Linux
If you would like to use this photo, be sure to place a proper attribution linking to Ask Xmodulo
How to build a network attached storage (NAS) server with Openfiler
If you would like to use this photo, be sure to place a proper attribution linking to xmodulo.com
SystemImager and BitTorrent: a peer-to-peer approach to large scale OS deployment
by Erich Focht (NEC HPC Europe), Andrea Righi (CINECA), Brian Finley (Argonne National Laboratory), Bernard Li (pharma startup in the Bay Area)
Thursday, 31 May 2007, Hall 4: Kaiserslautern, 11:00-12:00
This paper describes the integration of BitTorrent as a transport method to deploy OS images to a large number of clients using SystemImager (http://www.systemimager.org). Traditionally, the classical client-server model does not scale well, thus large deployments, such as big clusters or complex grid-computing environments, need to be staged. An alternative scalable and reliable method for image distribution needs to be implemented, and BitTorrent seems to be a natural fit. Measurements with big clusters of hundreds of nodes show that the SystemImager BitTorrent deployment scales as expected and dramatically exceeds the performance of the existing multicast and rsync deployment methods. Experimental results of deployment time and bandwidth analysis on a large installation are documented in the paper to confirm the advantages of the peer-to-peer approach with respect to the other transports. The speed of the deployment and the support for heterogeneous hardware provide the means to deploy virtual clusters which are built on the same resource pool, where it is possible to dynamically re-initialize workspaces on user demand.
About the author Erich Focht:
- studied physics at the RWTH Aachen
- PhD work in theoretical Physics (computational methods) at RWTH Aachen and von Neumann Institute for Computing (NIC) Juelich
- started to work for NEC as Application Analyst in 1996, tuning of numerical algorithms and CAE simulation codes for parallel vector supercomputers
- worked as Linux kernel developer for IA64 and NUMA systems, involved in ATLAS64 and LSE projects aiming at scalability of Linux on large systems
- currently working as solution architect, doing research and development for cluster and grid software
- open source projects involvement: OSCAR (currently chair), SystemImager, SystemInstaller, SystemConfigurator, Ganglia, Kerrighed SSI
About the author Andrea Righi:
Andrea Righi was born in Colle di Val d'Elsa, a wonderful town in the heart of Tuscany (Italy), on 19 March 1978. He received the Laurea degree in Computer Engineering at the University of Siena in 2004. Currently he is a system administrator at CINECA (http://www.cineca.it/en/index.htm), the most important supercomputing centre in Italy. He is also involved in the deployment and maintenance of the DEISA Supercomputing Grid Infrastructure (https://www.deisa.org), the European grid. Andrea is one of the active core developers of SystemImager (http://wiki.systemimager.org), a popular tool for installing and updating Linux clusters. He has been an advocate of open-source software and a Linux user since 1999.
About the author Brian Finley:
Brian Elliott Finley has specialized in operating systems, system architecture, and system integration since 1990. He holds a number of technical certifications, writes articles for industry publications, and is the creator of SystemImager, the popular Linux installation and update software. In 1993 he fell in love with Linux and has been active in the open source community ever since. Mr. Finley is the Linux Strategist for Argonne National Laboratory, where he is focused on effective use of open source technologies for general IT infrastructure, as well as specific scientific applications. Mr. Finley lives in Naperville, IL, US with his wife, four children, and two large dogs.
About the author Bernard Li:
Bernard has been actively involved with HPC open source software since 2003. He was the chair of the OSCAR project from 2005-2006 and is still an active developer. He is also involved with the SystemImager and Ganglia projects.
He has authored various papers related to OSCAR and HPC in general. His article titled "OSCAR Cluster and Bioinformatics" was published in the November 2004 issue of Linux Journal.
He is interested in all things HPC, from cluster deployment, to cluster file distribution, scheduling systems and ad-hoc grid applications such as Apple's Xgrid.
The Nokia N9 is a lovely phone, but unix packages aren't yet available for it. Further, the Aegis security framework locks the user out of doing all sorts of neighborly things. Rather than wait for individual packages to be made available--or for the Aegis-neutered kernel to be ported--I instead built NetBSD's pkgsrc.
My build chain consists of qemu-system-arm with Debian Lenny and pkgsrc, the latter targeting /opt/pkgsrc. This is then rsync'ed to a server and then to my workstation, where the Harmattan fork of dpkg-deb produces a signed package of the entire installation. Emacs, perl, git, subversion, and all the other niceties of a real unix system come over without any need to neuter Aegis. As everything stays within its own path, I don't have to worry about package conflicts.
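The rsync hops in that chain are nothing exotic; roughly the following, with the host names and the staging path invented for illustration:
$ rsync -az /opt/pkgsrc/ me@server:/srv/pkgsrc-n9/      # from the qemu guest up to the server
$ rsync -az me@server:/srv/pkgsrc-n9/ /opt/pkgsrc/      # from the server down to the workstation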
How to create a cloud-based encrypted file system on Linux
If you would like to use this photo, be sure to place a proper attribution linking to xmodulo.com
How to allow incremental file sync for many users on Linux
If you would like to use this photo, be sure to place a proper attribution linking to xmodulo.com
How to set up a cross-platform backup server on Linux with BackupPC
If you would like to use this photo, be sure to place a proper attribution linking to xmodulo.com
How to exclude certain files in grsync:
Create a file called "exclude" in the .grsync folder (this was in my home directory: ~/.grsync).
Fill that file with the things you want to exclude. This is what mine looks like:
.Trash
.cache
Dropbox/.dropbox
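Since grsync is just a front end for rsync, the same list can be handed to plain rsync with --exclude-from; a sketch, with made-up source and destination paths (grsync's exact generated command line will differ):
$ rsync -av --exclude-from="$HOME/.grsync/exclude" "$HOME/" /media/backup/home/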
640x480 timelapse video made from JPGs.
This script also makes a single-pixel plain PGM for overall brightness evaluation. That can be graphed rather than made into a movie. Stay tuned.
#!/bin/bash
#
# Often want to invoke as, say,
# echo "timepic.sh" | at 05:30
while true; do
    date=$(date +%Y%m%d-%H%M%S)
    image="$date-capture0-fs.jpg"
    data="$date-data0-fs.pgm"
    echo "working on $image"
    # Grab a 640x480 frame from the webcam
    LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libv4l/v4l1compat.so fswebcam -v \
        --device v4l2:/dev/video0 -r 640x480 --skip 2 --frames 3 --save "$image"
    mogrify -rotate 270 "$image"
    # Shrink to a single plain-text pixel for overall brightness evaluation
    convert -compress none "$image" -resize 1x1 "$data"
    # Ship the files off and delete the local copies so the card doesn't fill up
    rsync -v --remove-source-files "$image" "$data" tai@shifter:/home/tai/Downloads
    # Stop and beep if this happens:
    # No space left on device (28) Whooops!
    sleep 24
done
# Assemble the frames shot between 17:00 and 18:59 into a 15 fps H.264 video:
cat 20130207-1[78]*.jpg | ffmpeg -f image2pipe -r 15 -vcodec mjpeg -i - -vcodec libx264 video.avi
20-01-10 Dan has just bought a little video camera because he is going bungee-jumping on Saturday. I work with Dan, he is a good workmate and an even better friend. I told Dan he should gaffa-tape the little video camera to his hand when he jumps off the platform so he can capture the feeling of free fall but I don't think he will. I went out shooting on my way to work today. I was trying out some new ideas and subjects within the general vicinity of my flat. When I got into work Dan was filming me coming in, so I returned the favour and shot off a couple of quick frames. When I looked at the LCD screen I knew I had my shot for today. I processed this in Silver Efex Pro (again) and gave it some extra contrast in Lightroom as well as a square crop.
Manual Page Read: Page 44 - Shooting an image without camera shake.
Images Viewed: Prophecy Blur (sorry, I don't know your real name, mate), as well as having an awesome 365 on the go, turned me on to another awesome photographer, Annemie Hiele, in yesterday's comments. I figured both of you should get a mention in today's post. If you have any good recommendations please let me know in the comments.
Other Inspiration: I spent most of this morning setting up a backup solution for my wife. I used rsync, cron and RSA shared keys to accomplish this, and it bent my head; a rough sketch of the pieces is below. Oh well, it's done now.
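For anyone wanting to do the same, the moving parts are roughly these three steps (a sketch; the user, host name, and paths are invented):
# 1. Create a key pair and copy the public key to the backup machine (no passphrase, so cron can run unattended)
$ ssh-keygen -t rsa -f ~/.ssh/backup_rsa -N ""
$ ssh-copy-id -i ~/.ssh/backup_rsa.pub wife@backup-box
# 2. Test the sync by hand
$ rsync -az -e "ssh -i ~/.ssh/backup_rsa" ~/Documents/ wife@backup-box:/backups/documents/
# 3. crontab -e entry: run the same sync every night at 01:00
0 1 * * * rsync -az -e "ssh -i /home/wife/.ssh/backup_rsa" /home/wife/Documents/ wife@backup-box:/backups/documents/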
On Twitter.
My 365 blog - greg365.mcmull.in
Two fresh LMDE installs, rsync'd from my desktop PC, since my file structure is the same across all my computers except my Win 7 work PC (the T510, underneath the T400). The netbook is an Acer Aspire One D250. The desktop is a black box.
Mac mini Intel Core Duo 1.66 + 2x500GB LaCie mini drives.
This Mac mini does client backups via rsync.
This is /NOT/ fiction... have a look at the screenshot to know more (and yes, install it!).
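Installing and trying it takes about ten seconds; shown here for MacPorts, which is evidently where the screenshot came from, judging by the FILES path further down:
$ sudo port install cowsay
$ echo "Moo." | cowsay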
cowsay(1)
NAME
cowsay/cowthink - configurable speaking/thinking cow (and a bit more)
SYNOPSIS
cowsay [-e eye_string] [-f cowfile] [-h] [-l] [-n] [-T tongue_string] [-W column] [-bdgpstwy]
DESCRIPTION
Cowsay generates an ASCII picture of a cow saying something provided by the user. If run with no arguments, it accepts standard input, word-wraps the message given at about 40 columns, and prints the cow saying the given message on standard output.
To aid in the use of arbitrary messages with arbitrary whitespace, use the -n option. If it is specified, the given message will not be word-wrapped. This is possibly useful if you want to make the cow think or speak in figlet(6). If -n is specified, there must not be any command-line arguments left after all the switches have been processed.
The -W specifies roughly where the message should be wrapped. The default is equivalent to -W 40, i.e. wrap words at or before the 40th column.
If any command-line arguments are left over after all switches have been processed, they become the cow's message. The program will not accept standard input for a message in this case.
There are several provided modes which change the appearance of the cow depending on its particular emotional/physical state. The -b option initiates Borg mode; -d causes the cow to appear dead; -g invokes greedy mode; -p causes a state of paranoia to come over the cow; -s makes the cow appear thoroughly stoned; -t yields a tired cow; -w is somewhat the opposite of -t, and initiates wired mode; -y brings on the cow's youthful appearance.
The user may specify the -e option to select the appearance of the cow's eyes, in which case the first two characters of the argument string eye_string will be used. The default eyes are 'oo'. The tongue is similarly configurable through -T and tongue_string; it must be two characters and does not appear by default. However, it does appear in the 'dead' and 'stoned' modes. Any configuration done by -e and -T will be lost if one of the provided modes is used.
The -f option specifies a particular cow picture file (``cowfile'') to use. If the cowfile spec contains '/' then it will be interpreted as a path relative to the current directory. Otherwise, cowsay will search the path specified in the COWPATH environment variable. To list all cowfiles on the current COWPATH, invoke cowsay with the -l switch.
If the program is invoked as cowthink then the cow will think its message instead of saying it.
COWFILE FORMAT
A cowfile is made up of a simple block of perl(1) code, which assigns a picture of a cow to the variable $the_cow. Should you wish to customize the eyes or the tongue of the cow, then the variables $eyes and $tongue may be used. The trail leading up to the cow's message balloon is composed of the character(s) in the $thoughts variable. Any backslashes must be reduplicated to prevent interpolation. The name of a cowfile should end with .cow, otherwise it is assumed not to be a cowfile. Also, at-signs (``@'') must be backslashed because that is what Perl 5 expects.
COMPATIBILITY WITH OLDER VERSIONS
What older versions? :-)
Version 3.x is fully backward-compatible with 2.x versions. If you're still using a 1.x version, consider upgrading. And tell me where you got the older versions, since I didn't exactly put them up for world-wide access.
Oh, just so you know, this manual page documents version 3.02 of cowsay.
ENVIRONMENT
The COWPATH environment variable, if present, will be used to search for cowfiles. It contains a colon-separated list of directories, much like PATH or MANPATH. It should always contain the /usr/local/share/cows directory, or at least a directory with a file called default.cow in it.
FILES
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_textproc_cowsay/work/destroot/opt/local/share/cows holds a sample set of cowfiles. If your COWPATH is not explicitly set, it automatically contains this directory.
BUGS
If there are any, please notify the author at the address below.
AUTHOR
Tony Monroe (tony@nog.net), with suggestions from Shannon Appel (appel@CSUA.Berkeley.EDU) and contributions from Anthony Polito (aspolito@CSUA.Berkeley.EDU).
SEE ALSO
perl(1), wall(1), nwrite(1), figlet(6)
$Date: 1999/11/04 19:50:40 $
cowsay(1)
--
Manish Chakravarty
Blog: manish-chaks.livejournal.com/
LinkedIn: www.linkedin.com/in/manishchakravarty
Twitter: twitter.com/ManishChaks
Facebook: www.facebook.com/manish.chakravarty