Scriptilitious – A Linux ScriptBox

 

If you need a few handy Bash scripts to do some simple tasks, then you’re in the right place. Welcome to Scriptilitious, the Linux scriptbox.

Scriptilitious is a plugin-based Linux Bash script manager.

The bash scripts included with Scriptilitious are:

  1. Auto Form Fill
  2. Auto Single Form Fill
  3. File Editor
  4. File Splitter
  5. HTML Parser
  6. Referrer Faker
  7. Site Scan
  8. Sitemap Maker
  9. Sitemap Ripper
  10. URL Extractor
  11. URL2Hyperlink

The scripts work on any Linux system that has sed, awk, grep, wget, cURL, sort and expand installed. Each script is written in Bash and should run in any terminal that understands Bash.
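
A quick way to confirm those dependencies are present (this loop is illustrative, not part of Scriptilitious):

for cmd in sed awk grep wget curl sort expand; do
  command -v "$cmd" >/dev/null || echo "$cmd is missing"
done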

The Scriptilitious project is hosted both here at JournalXtra and at SourceForge.net.

Installation is not required. Just download the zip file, extract it and read the readme file.

Instructions

Scriptilitious is simple to use:

  1. Download the zip file
  2. Extract it
  3. Open the Scriptilitious directory
  4. Open a terminal in the Scriptilitious directory
  5. Type bash ../script* into the terminal and press Enter
  6. Follow the on screen instructions
  7. Any files to be worked on or used by Scriptilitious should be placed in the Scriptilitious/WorkBox directory.

If you encounter any problems getting Scriptilitious to run, you will likely only need to change the file permissions of scriptilitious.sh to ‘executable’.
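
For example, from a terminal in the Scriptilitious directory:

chmod +x scriptilitious.sh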

Post any errors in the comments section.

A Few Definitions

The following terms will be used with the following definitions throughout this support page for Scriptilitious:

  • TLD
  • URL
  • Scriptilitious
  • ScriptBox

The TLD (Top Level Domain) is the last part of a domain name: the bit after the final dot and before the forward slash (if any) that follows it. For example, .com and .org are both TLDs, and a domain name like http://JournalXtra.com/ has the TLD .com.

The URL is the full address of a file located on a web server. For example https://journalxtra.com/index.php is a full URL to the file index.php.

Scriptilitious is the name of this project: the collection of scripts described below plus the wrapper they plug into.

ScriptBox is the name of the wrapper that makes the scripts in Scriptilitious accessible. It forms the menu.

Auto Form Fill

The most recent addition to Scriptilitious, Auto Form Fill reads a list of URLs and fills out a specified form served at each of them.

It’s not brilliant but it does its job. It’s ideal for automatically posting comments to blogs, linkdumps, guest books and any other online place that allows visitor comments. It does not currently bypass captchas or other spam prevention systems designed to prevent automatic posting.

Instructions:

  1. Place a list of links into the ScriptBox folder
  2. Run Scriptilitious
  3. Choose menu item 1, auto-form-filler
  4. Select the list of links as the input file
  5. Let Auto Form Fill scan the first site in the link list for form data
  6. Follow the on screen prompts to configure Auto Form Fill
  7. Leave Auto Form Fill to visit each website and fill out the specified form

Auto Form Fill uses cURL to fetch a web page and a Perl script (formfind.pl) to find the forms presented on that page. It needs you to tell it the name of the page that the form sends its data to, and the names of the form fields you want it to auto fill. All information is presented on screen with detailed usage information at each stage.

For example, Auto Form Fill might visit a page that contains an HTML form. The form there has 3 fields and 1 button:

  • Field 1: Name
  • Field 2: Age
  • Field 3: Town
  • Button: Send

The mini-site structure is thus:

  • Form Page: https://journalxtra.com/tutorials/php-data-passing/page-one.html
  • Data Processing Page: https://journalxtra.com/tutorials/php-data-passing/page-two.php

cURL downloads the form page, the one ending in page-one.html. The formfind script reads page-one.html and lists any forms it finds. For example, formfind.pl’s output for page-one.html is:

--- FORM report. Uses POST to URL "page-two.php"
Input: NAME="Name" (TEXT)
Input: NAME="Age" (TEXT)
Input: NAME="Town" (TEXT)
Input: NAME="Form_Submit" VALUE="Send" (SUBMIT)
--- end of FORM

Notice the top line ‘--- FORM report. Uses POST to URL "page-two.php"’. The POST to URL is page-two.php (the mini-site’s data processing page). This is the page that cURL needs to send data to.

All the lines that begin with “Input” show you the field names for each input field of the form. Auto Form Fill shows you the formfind script’s output, then asks you for the POST to URL page, the field names used by the form, and your answers to whatever information is being requested by those fields.

When cURL visits page-two.php it sends your answers and stores the server response in a log file in the WorkBox directory of Scriptilitious. The example page, page-two.php, also logs the information in its own log file on my server.
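
Under the hood, the calls are roughly equivalent to this sketch (the field values are illustrative, not part of the script):

# Fetch the form page, then list its forms with formfind.pl
curl -s https://journalxtra.com/tutorials/php-data-passing/page-one.html -o page-one.html
perl formfind.pl < page-one.html

# POST the answers to the data processing page and keep the server response
curl -s -d "Name=Alice" -d "Age=30" -d "Town=Springfield" -d "Form_Submit=Send" https://journalxtra.com/tutorials/php-data-passing/page-two.php > response.log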

You can test Auto Form Filler by putting the URL https://journalxtra.com/tutorials/php-data-passing/page-one.html into a site list file in the WorkBox directory of Scriptilitious.

Because of the way the script works, every page it visits should send data to a page of the same name (e.g. page-two.php). This is not usually a problem when all the sites in the site list use the same scripts to manage their forms, e.g. if they are all WordPress blogs that use Disqus.

Auto Form Filler has an option for faking the referer URL given to servers when cURL visits them. It’s not always successful but it’s better than nothing.

In a future release, Auto Form Fill will automatically determine the form fields and the correct page or pages to send data to.

I will also adjust the script so it can be used to login to secure sites such as Facebook and Twitter so that status updates and other data can be sent to multiple secure sites.

Auto Single Form Fill

This works like Auto Form Fill but allows only one form to be completed, with multiple data inputs. For example, you can fill in a form 100 times, using a data file to provide the data sent to the form each time. In other words, you can fill a form with rotated variable values.
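
A rough sketch of the idea, reusing the example form above and assuming a hypothetical comma-separated data file:

# Submit the same form once per line of data.csv (file and field names are illustrative)
while IFS=',' read -r name age town; do
  curl -s -d "Name=$name" -d "Age=$age" -d "Town=$town" -d "Form_Submit=Send" https://journalxtra.com/tutorials/php-data-passing/page-two.php
done < data.csv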

This script will be updated to handle multiple form URLs and random data inputs in the near future.

File Editor

I’ve not tested this in a long time. It could be a project I began but didn’t finish, so don’t be surprised if it doesn’t work or does only half of what I intended it to do.

It’s supposed to make working with sed a bit easier by providing a frontend for some of its more frequently used operations. Here is what I wrote it does in the script’s blurb. I might not have gotten around to finishing it:

  • Automates the process of editing the contents of files.
  • Adds content
    • to the beginning of each line
    • to the beginning of a file
    • to the end of each line
  • Deletes content
    • from the beginning of each line
    • from the beginning of a file
    • from the end of each line
  • Extracts lines of data from a file and
    • places it into a new file, leaving the original file intact
    • places it into a new file, places the unextracted data into another file, and leaves the original file intact
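
For reference, the raw sed commands a frontend like this wraps look roughly as follows (file names illustrative):

sed 's/^/PREFIX /' file.txt        # add content to the beginning of each line
sed '1i NEW FIRST LINE' file.txt   # add content to the beginning of a file (GNU sed)
sed 's/$/ SUFFIX/' file.txt        # add content to the end of each line
sed 's/^PREFIX //' file.txt        # delete content from the beginning of each line
sed 's/ SUFFIX$//' file.txt        # delete content from the end of each line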

It’s not a project in my immediate interest to complete. I will get around to checking it over in a few months unless I’m paid to complete it ;)

File Splitter

This is a very basic script that provides an intuitive front end for the Linux split command. Use it to split large files into smaller chunks. As simple as it is, it is handy in that it takes the headache out of reading the man page for split.
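
For comparison, the underlying command is short but its options are easy to forget (file names illustrative):

split -l 1000 bigfile.txt chunk_   # 1000 lines per piece: chunk_aa, chunk_ab, ...
split -b 10M bigfile.bin chunk_    # 10 megabyte pieces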

HTML Parser

Purely an outcrop of the site scanner script, this strips all HTML tags and HTML special characters from an HTML page, leaving just the regular text. It is experimental but has been tested. I will go so far as to say that it seems to work. Let me know when you find any bugs.

Its features are:

  1. Converts HTML friendly tags into regular symbols (e.g. &quot; to ")
  2. Converts HTML special characters into regular symbols (e.g. &#33; to !)
  3. It removes XML tags
  4. It removes HTML tags
  5. It removes comments
  6. It’s valid up to (X)HTML 5
  7. It removes browser-specific tags (e.g. those for Netscape, IE and Konqueror)
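
The general approach can be sketched with sed like this (a simplification, not the script’s exact rules):

# Strip comments and tags, then decode a few common entities (illustrative only)
sed -e 's/<!--[^>]*-->//g' -e 's/<[^>]*>//g' page.html |
  sed -e 's/&quot;/"/g' -e 's/&lt;/</g' -e 's/&gt;/>/g' -e 's/&amp;/\&/g'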

Referer Faker

This is designed to trick websites into thinking they are being visited by surfers who have clicked a link on another website. It comes in handy for Black Hat SEO projects. I wrote this as a proof of concept.

To use it, just put a list of URLs into a file in the WorkBox directory then choose that file from within Scriptilitious while running the referer-faker script.

Referer Faker lets you use a proxy server while referer spamming (as it’s known in the business).

It uses wget to call websites. I intend to add an option for using either wget or cURL when I next update Scriptilitious.
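
With wget, the trick looks like this (URLs illustrative):

# Request a page with a spoofed Referer header
wget -q --referer="http://referring-site.example/" "http://target-site.example/page.html"

# Route the request through a proxy as well; wget honours the http_proxy variable
http_proxy="http://127.0.0.1:8080/" wget -q --referer="http://referring-site.example/" "http://target-site.example/page.html"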

Site Scan

Ever needed to scan a series of URLs to determine whether the web sites they present contain a specific hyperlink, phrase or other string?

If you have one or two URLs to check, it’s easy enough to load them in your browser, press Ctrl+F and manually search for the phrase you are looking for. If you have a hundred or a thousand URLs to check, it’s much easier to use a URL checking script; and that is exactly what Site Scan is – it reads a list of URLs from a text file, uses wget to silently download the web pages they present, then greps each downloaded page for the specified text.
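
The core of that loop, in sketch form (file names illustrative):

# Check every URL in url-list.txt for a phrase
while read -r url; do
  wget -q -O page.tmp "$url"
  grep -q "search phrase" page.tmp && echo "Found on: $url"
done < url-list.txt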

It features:

  1. Option to strip page content from its <body></body> tags
  2. Experimental (seems to work well) (X)HTML parser which can strip all (X)HTML and XML tags from a page
  3. Special HTML character conversion into regular symbols (e.g. &lt; to < and &gt; to >)
  4. It can search for URLs
  5. It can search for (X)HTML snippets
  6. It can search for lines of regular text

The (X)HTML parser is valid for tags up to (X)HTML 5.

Sitemap Maker

Sitemap Maker converts any list of URLs into an XML sitemap. Some of its plus points are:

  1. extracts all or user-specified URLs from the URL list being converted to a sitemap
  2. converts URLs to either their http or www forms according to user preference
  3. converts special characters to their escaped character equivalents
  4. allows a general priority to be stated
  5. allows a general update frequency to be stated
  6. adds or removes URL trailing slashes according to user preference. Leaves them untouched if requested
  7. ensures the URLs in the sitemap are unique
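
The transformation itself boils down to wrapping each URL in a standard sitemap entry; a bare-bones sketch (values illustrative, not the script’s exact output):

# Turn list.txt into a minimal XML sitemap
echo '<?xml version="1.0" encoding="UTF-8"?>'
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
while read -r url; do
  printf '  <url>\n    <loc>%s</loc>\n    <changefreq>weekly</changefreq>\n    <priority>0.8</priority>\n  </url>\n' "$url"
done < list.txt
echo '</urlset>'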

A guide to sitemap creation can be found in the JournalXtra article Crafty Sitemap Building.

Sitemap Ripper

Sitemap Ripper merges two XML sitemaps into one large sitemap. It works pretty much the same way as Sitemap Maker, the only difference being the option to choose the sitemaps to be merged. Keep an eye on the size of the generated sitemap file to ensure it exceeds neither 10 MB nor 50,000 URLs. An automatic size warning will be implemented at a future date.

It is usually better to reference multiple sitemaps with a sitemap index, to reference the sitemap index within the host server’s robots.txt file, and to report and ping the sitemap index to the search engines of your choice.
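
The robots.txt reference is a single line (URL illustrative):

Sitemap: https://example.com/sitemap_index.xml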
