An IP alive-scanning script implemented in Python

Download

ActiveOrNot

Deduplicates the results of subdomain enumeration tools such as oneforall and then checks which hosts are alive.

Options

-f --file    file containing the IPs or subdomains, default ip.txt
-t --thread  number of threads, default 50

python3 ActiveOrNot.py -f ip.txt -t 12
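
The input file is expected to hold one IP or (sub)domain per line; schemes and duplicate entries are fine, because the script strips the scheme and deduplicates before scanning. A hypothetical ip.txt (these hosts are made up for illustration) might look like:

http://example.com
example.com
192.168.1.10
sub.example.com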

Full source: ActiveOrNot.py

from threading import Thread
from queue import Queue
from time import time
import argparse

import requests

# Spoof a common browser User-Agent so simple filters do not reject the probe
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
}


def ping(url, new_ip):
    """Probe a single host over HTTP and record it if it responds."""
    url = url.strip()
    if not url.startswith(('http://', 'https://')):
        url = 'http://' + url
    try:
        req = requests.get(url, headers=headers, timeout=2)
        # Queue is thread-safe, so worker threads can push results concurrently
        new_ip.put(url + ' -- ' + str(req.status_code))
        print("%s is alive" % url)
    except requests.RequestException:
        print("%s is not alive" % url)


def new_list(file):
    """Read the input file and return a deduplicated list of hosts."""
    with open(file, 'r') as f:
        new_ip = []
        ip_list = f.readlines()
        for ip in ip_list:
            # Strip the scheme so "http://a.com" and "a.com" collapse into one entry
            ip = ip.strip().replace('http://', '').replace('https://', '')
            if ip and ip not in new_ip:
                new_ip.append(ip)
        return new_ip


def main(file, th):
    begin_time = time()
    new_ip = Queue()
    ip_list = new_list(file)
    j = 0
    length = len(ip_list)
    # Launch probes in batches of `th` threads and wait for each batch to finish
    while j < length:
        threads = []
        for i in range(th):
            t = Thread(target=ping, args=(ip_list[j], new_ip))
            t.start()
            threads.append(t)
            j += 1
            if j == length:
                break
        for thread in threads:
            thread.join()
    # Append every live host (with its status code) to NewIP.txt
    with open('NewIP.txt', 'a+') as nf:
        while not new_ip.empty():
            nf.write(new_ip.get() + '\n')
    end_time = time()
    run_time = end_time - begin_time
    print("Total time: %s seconds" % run_time)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='url active scan')
    parser.add_argument("-f", "--file", help="file of IPs/subdomains", default='ip.txt')
    # type=int is needed here: a value passed on the command line would otherwise stay a string
    parser.add_argument("-t", "--thread", help="number of threads", type=int, default=50)
    args = parser.parse_args()
    main(args.file, args.thread)
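
Running it as shown above (for example python3 ActiveOrNot.py -f ip.txt -t 12) prints one line per host on the console and appends the live hosts, with their HTTP status codes, to NewIP.txt. The hosts and timing below are illustrative only:

http://example.com is alive
http://192.168.1.10 is not alive
Total time: 2.3 seconds

One design note: the script starts threads in fixed batches of th and joins each batch before starting the next, so a single slow host can stall its whole batch. A minimal alternative sketch, assuming the same ping() and new_list() functions from the script, uses concurrent.futures.ThreadPoolExecutor to keep all th workers busy instead:

# Alternative sketch only -- not part of ActiveOrNot.py; assumes ping() and new_list() above.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def main_pool(file, th):
    new_ip = Queue()
    ip_list = new_list(file)
    with ThreadPoolExecutor(max_workers=th) as pool:
        for ip in ip_list:
            pool.submit(ping, ip, new_ip)
    # Leaving the with-block waits for all submitted probes to finish
    with open('NewIP.txt', 'a+') as nf:
        while not new_ip.empty():
            nf.write(new_ip.get() + '\n')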

That covers the Python IP alive-scanning script in detail; for more material on IP alive scanning with Python, see the other related articles on this site.
