Jesús Tejedor

Currently working on Telefónica Tech's DRP team as a Cyber Intelligence Analyst. Expert in digital surveillance, cyber threat detection, and OSINT investigations. Passionate about researching online fraud and cybercrime trends.

Cyber Security
Kali GPT: the AI assistant for automation and analysis in cyber security
In an era where cutting-edge AI tools are emerging to enhance the work of cyber security analysts, it's becoming increasingly difficult to identify which ones deliver realistic, applicable value in professional environments. This is where Kali GPT comes into play: a tool that integrates advanced language models, such as GPT-4, and aligns with offensive security and pentesting workflows.

Although it can be run within Kali Linux, it's important to clarify that Kali GPT is neither an official Kali project tool nor developed by Offensive Security. This has implications for support, warranty, and maintenance, and prompts the question: is it truly a disruptive innovation, or just a tailored interface in disguise?

Unlike traditional chatbots, Kali GPT doesn't just answer questions: it executes real commands, interprets outputs, and automates OSINT and pentesting tasks.

Standing out from the crowd

As a technical assistant, Kali GPT's main objective is to provide natural language-driven support and enable direct execution of tools like nmap, whois, amass, dnsrecon, and theHarvester, among many others, all in real time. The tool can be installed locally, or even used through themed assistants in ChatGPT to learn and simulate workflows. This approach turns the Linux terminal into an intelligent copilot, offering capabilities that include:

Contextual command assistance: It interprets technical queries and provides tailored responses, suggesting specific commands, tools, and expected outcomes.

Automation and script generation: It generates Bash or Python scripts for reconnaissance, scanning, or exploitation tasks. It can also automate the creation of payloads, reverse shells, or custom scripts based on detected vulnerabilities.

Integration with OSINT tools and external platforms: It combines data from sources like Shodan, VirusTotal, and vulnerability databases.
Continuous updates and learning: Keeps up with changes in the Kali ecosystem and provides advanced suggestions for pentesters.

Kali GPT understands your request, runs the appropriate commands, captures the output, and delivers an actionable summary.

AI-powered OSINT

Among Kali GPT's many capabilities, this article focuses on its applications within open-source intelligence (OSINT). Here are several use cases where it can boost efficiency:

Fraudulent domain detection: Helps identify patterns in domain names, automates searches to detect typosquatting, checks blacklists, and summarizes key findings. It also discovers subdomains, though it has limitations when pivoting to other domains within the same fraud campaign.

IP reputation analysis: A strong point. It queries public threat intelligence databases, classifies IPs by threat type, and correlates hosted domains with threat actors. Unlike standard chatbots, it can yield useful results for malicious IPs.

> Analyze IP 45.83.xx.x. Check if it appears on blacklists, what malicious activity it's linked to (spam, C2, brute force…), which ports are open, and whether it's been recently reported. Summarize the threat level and whether it should be blocked.

Example results from Kali GPT for a malicious IP:

Threat type: Ransomware – Double extortion.
First seen: March 2025.
Tactics: Data exfiltration, encryption using the .nspire extension.
Tools used: WinSCP, MEGAcmd, 7-Zip, Everything.exe.
IP: 14.1X9.1XX.6X – Server used for data exfiltration via SCP/SFTP.
SHA-256 hashes: 35cefe4bc4aXXXad73dda444ac700aac9, f749efde8f9dXXXX
.onion addresses (leak sites):
● nsXXXzmvapgiwgtuoznIafqvlyz7ey6himtgn5bXXXXfyto3yryd.onion
● a2XXX74tlgz4fk3ft4akolaXXpfrzk772dk2XX32cznjsmzpangd.onion

Threat intelligence (CTI): Correlates indicators of compromise (IoCs) from open sources with tactics from known APT groups. It can even map activity to the MITRE ATT&CK framework, although it struggles to contextualize certain recent IoCs.
Social media profile investigations: While it can detect profiles sharing the same alias, its ability to correlate identities across platforms is inconsistent, and it struggles to identify fraudulent content spread across multiple accounts.

> Investigate the alias jesusx91 on social media. Tell me which platforms it's active on, whether the profiles seem real or fake, and what public data can be extracted: location, photo, bio, activity patterns, or external links.

Fraud investigation: Depending on the type of scam, it can evaluate whether a domain impersonates a brand by analyzing its structure, certificates, and headers, and comparing it with the legitimate site. Most notably, it identifies common patterns across fraudulent sites to expand the investigation.

Vulnerability detection: Identifies vulnerable service versions and links them to known CVEs, available exploits, and existing patches, filtering by severity score.

Across all these use cases, the key advantage is the automation of multiple tools and guided interpretation, which saves analysts significant time.

■ On the flip side, it has some limitations: it requires the user to set up API keys for external platforms, can't bypass CAPTCHAs in manual social media searches, and may occasionally return false positives.

Is Kali GPT just another chatbot?

Unlike generic conversational agents such as 'OSINT GPT' or 'Intel Sourcing Agent', which can suggest commands or explain concepts, Kali GPT stands out by executing real tools on active networks:

It runs actual terminal commands.
It interprets outputs with technical awareness, not just superficial explanations.
It automates OSINT workflows without requiring scripting from the analyst.
It operates locally, enhancing both privacy and performance.

In this sense, it resembles an intelligent orchestration layer inside Kali Linux more than a traditional chatbot.
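The fraudulent domain detection use case above relies on enumerating typosquatting candidates for a brand. As a rough illustration of the kind of work that gets automated, here is a minimal Python sketch that generates common variants of a domain. The mutation rules and the example domain are illustrative assumptions, not Kali GPT's actual implementation; a real workflow would then resolve each candidate and check it against blacklists.

```python
# Minimal typosquatting-candidate generator. The mutation rules and the
# example brand domain are illustrative assumptions for this sketch.

def typosquat_candidates(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    candidates = set()

    # Character omission: "brand" -> "brnd"
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)

    # Adjacent-character swap: "brand" -> "barnd"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)

    # Common homoglyph substitutions: "o" -> "0", "l" -> "1", etc.
    homoglyphs = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}
    for src, dst in homoglyphs.items():
        if src in name:
            candidates.add(name.replace(src, dst) + "." + tld)

    # Character duplication: "brand" -> "bbrand"
    for i in range(len(name)):
        candidates.add(name[:i] + name[i] + name[i:] + "." + tld)

    candidates.discard(domain)  # the legitimate domain is not a candidate
    return candidates

for d in sorted(typosquat_candidates("telefonica.com")):
    print(d)
```

Each candidate would then be fed to DNS resolution, certificate transparency lookups, and blacklist checks, which is exactly the repetitive pivoting an assistant can orchestrate.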
■ Kali Linux remains the core operating environment, hosting the tools, configurations, and capabilities for pentesting, forensics, OSINT, and more. Kali GPT, meanwhile, could serve as a smart interface between the analyst and the system, enabling conversational operation.

Strengths and shortcomings: critical aspects to consider

Despite its advantages, Kali GPT has received some criticism and raised several concerns:

It's paid: Unlike the free Kali Linux, Kali GPT requires a subscription, limiting adoption in educational or personal contexts.

It relies on pre-installed tools: While it acts as a copilot, it still depends on the availability and currency of the underlying tools.

It's not a fully trained LLM: Critics note that it runs a standard GPT model with an embedded book as its main knowledge base, and that its developer is unknown. The AI on offer is geared towards technical workflows but lacks GPT-4's depth in textual and semantic reasoning.

It's overhyped as revolutionary: Some publications point out that it has been widely promoted on social media as an advanced AI when in reality it lacks genuine learning capabilities, and that better AI assistants exist without such publicity.

Conclusion

Kali GPT represents a middle ground between human analysts and intelligent automation. It's not a classic chatbot or just a web interface: it's an operational copilot that speaks our language and acts in our terminal. Its value lies in lowering technical barriers, saving time, contextualizing data, and enabling less experienced users to perform complex tasks. It can be particularly impactful in OSINT research and information-gathering processes.

That said, it should be used wisely, with an understanding of its limitations, ethical considerations, and the fact that it is not an official part of the Kali Linux ecosystem. Looking ahead, we're likely to see more tools like this: assistants that not only suggest, but act.
And in that future, we'll need to rethink what the role of the analyst will look like in the years to come.
July 9, 2025
Cyber Security
A dangerous alliance: the new Dark Web + AI marketplace
In a world where Artificial Intelligence is reshaping entire industries and the way we work, its most unsettling impact is happening far from the public eye: within the Dark Web ecosystem. What was once an anonymous space for sharing and selling information in underground forums tied to certain illicit activities has become a highly specialized marketplace, equipped with all kinds of AI tools to automate attacks and make the criminal market more sophisticated.

AI has transformed the cybercrime landscape. In just one year, mentions and sales of AI-powered tools on the Dark Web have surged by over 200%. Identity fraud scams have quadrupled, and 73% of companies worldwide have experienced some form of AI-related data breach. What once required advanced technical expertise can now be done with simple natural language prompts: just pay a monthly subscription! Could you tell the difference between your boss's real voice and an AI-cloned version asking for urgent access to critical systems?

However, it's important to note that, according to the Adversarial Misuse of Generative AI report by Google Threat Intelligence, AI has not yet been used to develop entirely new cyberattack capabilities. Instead, it has mainly been used to increase productivity and automate basic tasks.

The role of AI in the black market

The biggest shift in the ecosystem has been the integration of AI to automate and scale different types of fraud:

Advanced identity impersonation: Language models similar to ChatGPT are now used to craft hyper-realistic phishing messages, personalized using real data gathered from social networks and major data leaks. What once required advanced social engineering skills is now accessible to virtually anyone, thanks to tools like FraudGPT, enabling scammers to generate malicious code and orchestrate convincing fraud campaigns with no prior knowledge. This "Phishing as a Service" model is subscription-based. Think Netflix, but for illegal purposes.

Financial deepfakes:
AI has vastly improved the creation of synthetic video and audio. By cloning the voices and faces of senior executives, attackers can produce convincing media used to authorize multimillion-dollar transfers or manipulate critical decisions. The problem goes beyond audiovisual forgeries: AI-generated forgery of legal documents, including signatures, is becoming a much bigger trend and is openly traded.

Automated attacks: Tools based on open-source language models, dubbed "generative AI malware" (WormGPT, DarkBERT, or EvilGPT), are able to learn and modify their malicious code using AI, evading traditional antivirus detection. In 2025, this trend has evolved with the rise of Xanthorox, a platform that goes beyond simple natural-language-model variants. It's a system built from scratch with a modular architecture of specialized AI models, hosted on private servers. Its capabilities include automated malicious code generation, vulnerability exploitation, and voice-command attack execution. Its ability to operate offline and conduct real-time searches across 50+ engines makes it an extremely autonomous and dangerous tool.

AI bots on messaging platforms, the new face of mass fraud: In easily accessible spaces like Telegram channels, AI powers bots that support large-scale illicit activity: from selling illegal access credentials and card cloning to forging documents or simulating tech-support chats to trick victims. The alarming part is how easily these sophisticated attacks are launched through everyday, accessible interfaces. A single bot can process 36,000 transactions per second, matching the traffic volumes of leading e-commerce platforms on Black Friday.

Market distribution: AI as a black-market commodity

The distribution of "malicious" tools on the Dark Web is becoming increasingly sophisticated, inspired by traditional e-commerce models.
Underground forums, the cybercriminals' university: According to a report by Kaspersky, over 3,000 posts were detected in which cybercriminals discussed how to modify language models (LLMs) for malicious use. They often share AI scripts, examples, and step-by-step guides.

Subscription-based marketplaces: On marketplaces, tools like FraudGPT are rented for €170 per month or €1,500 per year. Some all-in-one kits exceed €4,000 and include tech support and updates.

Stolen accounts: Premium access to AI platforms like ChatGPT is sold for €8 to €500, depending on usage limits. Automated services can generate up to 1,000 fake accounts per day using stolen personal data.

Covert advertising: Bots on platforms like Telegram and Discord promote these tools with test messages to make them easily accessible to anyone.

Intermediaries and specialists: crime gets organized too

The underground market isn't pure chaos; it's structured. The so-called Initial Access Brokers (IABs) are intermediaries or groups specialized in infiltrating corporate networks and then selling that access to ransomware groups. They have even improved their methods using AI to clean, validate, and classify stolen databases, ensuring that the data they sell is actually useful.

In the Ransomware as a Service (RaaS) model, roles are clearly defined and highly professionalized. Access can be sold for anywhere between €400 and €10,000, depending on the target's size and value.

Meanwhile, the trade in compromised credentials has become an extremely lucrative and still-growing Dark Web market. Millions of user records, passwords, and email addresses are available for sale. In Spain alone, 33 leaks were detected in forums during the first quarter of 2025, affecting critical sectors such as government, transportation, energy, and industry.
Mapping the hidden market with OSINT

From a monitoring and analysis perspective, OSINT techniques are key to systematically exploring Dark Web content and maximizing useful intelligence. The approach begins with tracking [.]onion domains via specialized search engines, followed by automated scanning in secure environments. Primary targets include detecting stolen credentials, exposed vulnerabilities, data leaks, and active fraud campaigns. Systematic collection of this information enables not only real-time incident mapping but also the ability to anticipate threat actor movements and identify attack patterns.

At this stage, the application of AI can make a significant difference: natural language processing (NLP) techniques can analyze large volumes of data from messaging channels, understand the context of conversations, and filter relevant mentions, even when they are written in different languages or specialized slang. By cross-referencing data, we can correlate actors, understand their modus operandi, and generate early warnings.

Protecting against digital cybercrime

It is essential to adopt a proactive approach that combines technology, education, and governance. For example, organizations must implement AI-based solutions to detect and respond to threats in real time, alongside continuous risk monitoring. Cyber intelligence can also play a key role by detecting attack patterns and suspicious activities within digital environments and on the Dark Web. In this context, investing in digital education is crucial: it's no longer enough to recognize phishing emails; users must now be trained to detect advanced scams like deepfakes or cloned audio that mimics human conversations with surprising realism.

■ Be wary of "premium AI" deals on shady marketplaces. What seems like a bargain could be a trap!
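The mention-filtering stage described above can be sketched in a few lines. This is a deliberately simplified, keyword-based illustration: real monitoring pipelines use trained NLP models to handle slang and multiple languages, and the leak terms, brand names (acme-bank, acmecorp), and sample messages below are hypothetical.

```python
import re

# Simplified sketch of a dark-web mention filter: flag chat/forum messages
# that combine credential-leak vocabulary with a monitored brand.
# Keyword lists, brand names, and sample messages are illustrative only.

LEAK_TERMS = re.compile(
    r"\b(combo ?list|fullz|stealer logs?|db dump|creds?|accesos?)\b",
    re.IGNORECASE,
)
TARGETS = re.compile(r"\b(acme-bank|acmecorp)\b", re.IGNORECASE)  # hypothetical brands

def flag_message(text: str) -> dict:
    """Return matched leak terms and targeted brands for one message."""
    return {
        "leak_terms": LEAK_TERMS.findall(text),
        "targets": TARGETS.findall(text),
        # Raise an alert only when both a leak term and a brand appear.
        "alert": bool(LEAK_TERMS.search(text) and TARGETS.search(text)),
    }

messages = [
    "Selling fresh stealer logs, acme-bank sessions included",
    "anyone got a db dump from last week?",
    "meme thread, nothing to see here",
]

for msg in messages:
    if flag_message(msg)["alert"]:
        print("ALERT:", msg)
```

In a real pipeline, alerts like these would feed the correlation and early-warning stage rather than go straight to an analyst.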
June 5, 2025
Cyber Security
Protect your credentials: Job offer scams
The rise of online fraud is becoming more and more common in the digital age, whether in social networks, online shopping, email, or job offers. Any activity a person performs on the Internet involves exposing certain data, and cybercriminals are racing to steal all kinds of information, such as passwords and personal data. The theft of credentials has become the gold mine of the cybercrime business.

This phenomenon arises from the ease with which cybercriminals gain access to compromised accounts, establishing itself as a gateway for phishing and targeted attacks on businesses. What happens if your credentials are compromised, and what is the risk of your passwords or confidential data ending up in unwanted hands?

Personal data breaches and digital security are often underestimated, but would you worry just as much if you lost your house or car keys? When cybercriminals get hold of your passwords, they check whether you use the same one across all your applications and social networks. Protect your data like you protect your keys!

Once credentials have been stolen, cybercriminals put them to different uses: they sell them in underground forums or through Telegram, carry out impersonation attacks, or execute access attacks on systems to get into organizations, among other attacks. This is reflected in our report on cyber threats in 2023, which warns of the buying and selling of credentials as one of the main ways of compromising corporate networks.

The scale of the business is so large that there are actors known as Initial Access Brokers (IABs): criminal groups engaged in selling credentials and all kinds of illegitimate information on Deep Web forums. These forums have a direct impact on data breaches and cyberattacks, as ransomware groups resort to these marketplaces to trade credentials and carry out their criminal activities.
There are different techniques for stealing user credentials, such as phishing campaigns, social engineering attacks, malware, "infostealers", and exploits. Deception through social engineering on social networks is the most common, and one of the platforms most recently used to obtain credentials is LinkedIn.

LinkedIn: fake recruiter profiles

In the realm of social networks, cybercriminals are employing different and increasingly sophisticated manipulation techniques to deceive their victims, making fraud more and more difficult to identify. This challenge manifests itself on the LinkedIn platform, where there is a proliferation of fake profiles of supposed recruiters, attractive fake job offers, and account spoofing aimed at obtaining confidential user data. The success of these malicious actors is evident: LinkedIn is one of the most impersonated brands in the world, according to Check Point Research's (CPR) 2023 Brand Phishing Report.

The modus operandi seen recently involves impersonation campaigns using recruiter profiles. Cybercriminals use these profiles to contact potential victims, sending them attractive job offers via direct messages that contain malicious links or invite them to request more information via chat.

⚠️ The goal of these fake recruiters is to obtain victims' credentials, as well as to distribute malware to mobile devices in order to execute malicious code. This tactic is distinguished by personalized messages to candidates, free of misspellings, designed to deceive job seekers and people with job aspirations.

How do fake LinkedIn recruiters work?

Malicious actors use legitimate LinkedIn recruiter profiles that have previously been compromised or obtained from a data leak (e.g., the 2023 leak of 35 million LinkedIn users). At other times, they create fake profiles or bots to extend the fraud scheme and generate credibility.
They use sophisticated fraud structures with many fake profiles, which allow them to share publications from the alleged company to simulate legitimacy.

They target professional profiles, focusing on a specific sector and role to carry out a massive campaign.

They prepare and send messages via private chat, accompanied by an image that simulates a PDF or Word document with the details of the job offer. But beware: it's just the bait for the malicious link! Cybercriminal groups then initiate an exchange of messages that look real in order to gain the victim's trust.

One way to detect and combat these scams is through the use of OSINT (Open-Source Intelligence) techniques, which consist of analyzing open-source information and facilitate fraud detection. Let's take a look at an example:

Example of a fake job offer on LinkedIn. Own source.

It all starts with a private message from a supposed recruiter. At first it looks like a legitimate message, with no suspicious patterns: there are no spelling errors, the recruiter's profile is detailed, it provides the company's contact information, and it attaches a personalized document with the job offer for the candidate.

When looking at the recruiter's profile and suspecting a possible scam, a simple Google Lens search of the recruiter's profile image can be performed.

⚠️ When reviewing the image of the supposed IT recruiter who calls herself Marta, surprisingly she also appears as Nicole, Alaska, Anna, and Celina, and, curiously, she works at different companies and locations around the world! Fraudsters use both stolen images and images created by Artificial Intelligence; as a measure of the scale, during 2022 thousands of profiles created with AI-generated faces were identified on LinkedIn.
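A reverse image search like Google Lens works interactively; at monitoring scale, reused profile photos are often flagged automatically with perceptual hashing. Below is a minimal average-hash (aHash) sketch operating on toy 8x8 grayscale grids, using only the standard library. Real pipelines decode and resize actual image files with libraries such as Pillow or imagehash, so treat this purely as an illustration of the idea; the toy pixel data is made up.

```python
# Average-hash (aHash) sketch for spotting near-duplicate profile photos.
# Operates on toy 8x8 grayscale grids; real tools hash decoded, resized
# images. All pixel data below is synthetic and illustrative.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash an 8x8 grayscale grid: one bit per pixel above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Toy "photos": the second is the first with slight brightness noise,
# the third is an unrelated (inverted) image.
photo_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
photo_b = [[min(255, p + 3) for p in row] for row in photo_a]
photo_c = [[(255 - (r * 8 + c) * 4) % 256 for c in range(8)] for r in range(8)]

h_a, h_b, h_c = map(average_hash, (photo_a, photo_b, photo_c))
print("a vs b:", hamming(h_a, h_b))  # small distance: likely the same image
print("a vs c:", hamming(h_a, h_c))  # large distance: different images
```

A small Hamming distance between hashes flags the same photo reused under the "Marta"/"Nicole"/"Anna" aliases, even after recompression or minor edits.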
Another suspicious pattern is the sending of a PDF or Word document: clicking the link to the juicy "job offer" displays a web page that simulates the LinkedIn login, handing the credentials to the malicious actors. Instead, copy the link of the supposed document and enter it on a web page that confirms whether it is safe. There are different tools for analyzing the link obtained, such as filescan, which helps us confirm whether it is malicious and definitively report the scam.

Image: Example of a malicious link in Filescan. Own source.

Despite LinkedIn adding new security features in October 2022 to combat fake profiles, cybercriminals continue to update their modus operandi to evade detection and reach more victims in the shortest possible time. This is where cyber intelligence plays a key role in protecting data. Telefónica Tech's DRP team detects suspicious activities early. By monitoring and integrating intelligence across multiple platforms, the risk of compromised credentials can be mitigated and threats can be dealt with quickly.

Recommendations to avoid job offer scams

Be wary of suspicious messages: If you receive a private message that seems unusual, do not click on any link; check the recruiter's profile and activity on the social network before taking any action.

Keep an eye on connection requests: Question their motives and check their communication channels before accepting.

Check the social network's privacy settings: Display only the necessary information to keep your data protected.

Use strong passwords and enable two-factor authentication.

Verify the legitimacy of the company: Investigate whether the company really exists and do not hesitate to call to confirm that they are looking for employees.

Protect yourself to keep your information safe in the digital world. Security is in your hands!

Image by Sergeycauselove at Freepik.
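Before submitting a suspicious link to an analyzer such as filescan, a few structural red flags can be checked locally. The heuristics below are a minimal sketch, not a detection product: the expected brand domain and the example URLs are illustrative assumptions, and a clean result here never replaces a full scan.

```python
from urllib.parse import urlparse

# Quick structural red-flag check for a link received in a chat, before
# submitting it to an online scanner. Heuristics and example URLs are
# illustrative assumptions, not a substitute for full analysis.

LEGIT_DOMAIN = "linkedin.com"  # the brand the message claims to come from

def red_flags(url: str) -> list[str]:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    flags = []
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host != LEGIT_DOMAIN and not host.endswith("." + LEGIT_DOMAIN):
        if LEGIT_DOMAIN.split(".")[0] in host:
            flags.append("lookalike: brand name inside an unrelated domain")
        else:
            flags.append("domain does not belong to the expected brand")
    if host.count(".") >= 3:
        flags.append("unusually deep subdomain nesting")
    if "xn--" in host:
        flags.append("punycode (possible homoglyph) domain")
    return flags

for link in ("https://www.linkedin.com/jobs/view/123",
             "http://linkedin.secure-offer-check.com/login"):
    print(link, "->", red_flags(link) or "no obvious red flags")
```

The second example trips both the HTTPS and the lookalike checks: the brand name appears as a subdomain of an unrelated site, exactly the pattern used by the fake "job offer" pages described above.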
March 18, 2024