I thought this might come in handy for anyone who, like me, works with large sets of data. It's a simple shell script that takes a text file as an argument and prints each unique 'word' in the file together with the total number of occurrences of that word (its 'frequency'), one word per line.
Remember to chmod +x or it won’t run.
#!/bin/bash
# wf: Crude word frequency analysis on a text file.

ARGS=1          # Script expects exactly one argument (the filename).
E_BADARGS=85
E_NOFILE=86

# Check for input file on command line.
if [ $# -ne "$ARGS" ]  # Correct number of arguments passed to script?
then
  echo "Usage: `basename $0` filename"
  exit $E_BADARGS
fi

if [ ! -f "$1" ]       # Check if file exists.
then
  echo "File \"$1\" does not exist."
  exit $E_NOFILE
fi
cat "$1" | xargs -n1 | \
# List the file, one word per line.
tr A-Z a-z | \
# Shift characters to lowercase.
sed -e 's/\.//g' -e 's/\,//g' -e 's/ /\
/g' | \
# Filter out periods and commas, and
#+ change space between words to linefeed,
sort | uniq -c | sort -nr
# Finally prefix occurrence count and sort numerically.
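
Assuming you save it as wf.sh (the name is up to you), here's a quick sanity check. The exact column padding and the order of the one-count ties can vary between uniq and sort implementations:

chmod +x wf.sh
echo "The quick brown fox. The lazy dog. The end." > sample.txt
./wf.sh sample.txt
# Prints something like:
#       3 the
#       1 quick
#       1 lazy
#       1 fox
#       1 end
#       1 dog
#       1 brown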
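
One caveat: xargs treats quote characters as special, so a file containing an apostrophe ("don't", "it's") will abort the pipeline with an "unmatched single quote" error, and only periods and commas get filtered out. If that bites you, here's a rough sketch of an alternative (my own variant, not part of the script above) that lets tr squeeze every run of non-letters into a single newline. Note that it also splits hyphenated words and contractions, so "don't" counts as "don" and "t":

# Variant: treat any run of non-letters as a word separator.
tr A-Z a-z < "$1" | tr -cs a-z '\n' | sed '/^$/d' | sort | uniq -c | sort -nr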