
Linux Web Hosting, DevOps, and Cloud Solutions

Empowering you with the knowledge to master Linux web hosting, DevOps and Cloud


Tracking File Activity (Deletion) with auditd and Process Accounting in Linux

Maintaining a secure system involves monitoring file system activity, especially tracking file deletions, creations, and other modifications. This blog post explores how to leverage two powerful tools, auditd and process accounting with /usr/sbin/accton (provided by the psacct package), to gain a more comprehensive understanding of these events in Linux.

Introduction

Tracking file deletions in a Linux environment can be challenging. Traditional file monitoring tools often lack the capability to provide detailed information about who performed the deletion, when it occurred, and which process was responsible. This gap in visibility can be problematic for system administrators and security professionals who need to maintain a secure and compliant system.

To address this challenge, we can combine auditd, which provides detailed auditing capabilities, with process accounting (psacct), which tracks process activity. By integrating these tools, we can gain a more comprehensive view of file deletions and the processes that cause them.

What We’ll Cover:

1. Understanding auditd and Process Accounting
2. Installing and Configuring psacct
3. Enabling Audit Tracking and Process Accounting
4. Setting Up Audit Rules with auditctl
5. Simulating File Deletion
6. Analyzing Audit Logs with ausearch
7. Linking Process ID to Process Name using psacct
8. Understanding Limitations and Best Practices

Prerequisites:

1. Basic understanding of Linux commands
2. Root or sudo privileges
3. The auditd package installed (it is installed by default on most distributions)

1. Understanding the Tools

auditd: The Linux audit daemon logs security-relevant events, including file system modifications. It allows you to track who is accessing the system, what they are doing, and the outcome of their actions.

Process Accounting: Linux keeps track of resource usage for processes. By analyzing process IDs (PIDs) obtained from auditd logs and utilizing tools like /usr/sbin/accton and dump-acct (provided by psacct), we can potentially identify the process responsible for file system activity. However, it’s important to understand that process accounting data itself doesn’t directly track file deletions.

2. Installing and Configuring psacct

First, install the psacct package using your distribution’s package manager if it’s not already present:

# For Debian/Ubuntu based systems
sudo apt install acct

# For Red Hat/CentOS based systems
sudo yum install psacct

3. Enabling Audit Tracking and Process Accounting

Ensure auditd is running by checking its service status:

sudo systemctl status auditd

If not running, enable and start it:

sudo systemctl enable auditd
sudo systemctl start auditd


Next, initiate recording process accounting data:

sudo /usr/sbin/accton /var/log/account/pacct

This will start saving the process information in the log file /var/log/account/pacct.

4. Setting Up Audit Rules with auditctl

To ensure audit rules persist across reboots, add the rule to the audit configuration file. The location of this file may vary based on the distribution:

For Debian/Ubuntu, use /etc/audit/rules.d/audit.rules
For Red Hat/CentOS, use /etc/audit/audit.rules
Open the appropriate file in a text editor with root privileges and add the following line to monitor deletions within a sample directory:

-w /var/tmp -p wa -k sample_file_deletion
Explanation:

-w: Specifies the directory to watch (here, /var/tmp)
-p wa: Monitors write (w) and attribute (a) changes (deleting a file counts as a write to the watched directory)
-k sample_file_deletion: Assigns a unique key for easy identification in logs


After adding the rule, restart the auditd service to apply the changes:

sudo systemctl restart auditd
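
You can confirm the rule is loaded by listing the active audit rules:

sudo auditctl -l

The output should include the rule we just added, ending with the key sample_file_deletion.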

5. Simulating File Deletion

Create a test file in the sample directory and delete it:

touch /var/tmp/test_file
rm /var/tmp/test_file

6. Analyzing Audit Logs with ausearch

Use ausearch to search audit logs for the deletion event:


sudo ausearch -k sample_file_deletion
This command will display audit records related to the deletion you simulated. Look for entries indicating a “delete” operation within your sample directory, and note down the process ID for the action.

# ausearch -k sample_file_deletion
...
----
time->Sat Jun 16 04:02:25 2018
type=PROCTITLE msg=audit(1529121745.550:323): proctitle=726D002D69002F7661722F746D702F746573745F66696C65
type=PATH msg=audit(1529121745.550:323): item=1 name="/var/tmp/test_file" inode=16934921 dev=ca:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:user_tmp_t:s0 objtype=DELETE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(1529121745.550:323): item=0 name="/var/tmp/" inode=16819564 dev=ca:01 mode=041777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tmp_t:s0 objtype=PARENT cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=CWD msg=audit(1529121745.550:323):  cwd="/root"
type=SYSCALL msg=audit(1529121745.550:323): arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=9930c0 a2=0 a3=7ffe9f8f2b20 items=2 ppid=2358 pid=2606 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=2 comm="rm" exe="/usr/bin/rm" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sample_file_deletion"

As you can see in the log above, the user root (uid=0) deleted the file /var/tmp/test_file using /usr/bin/rm (exe="/usr/bin/rm"). Note down the ppid=2358 and pid=2606 as well. If the file was deleted by a script or cron job, you will need these to track it down.

7. Linking Process ID to Process Name using psacct

The audit logs will contain a process ID (PID) associated with the deletion. Utilize this PID to identify the potentially responsible process:

Process Information from dump-acct

After stopping process accounting recording with sudo /usr/sbin/accton off, analyze the captured data:

sudo dump-acct /var/log/account/pacct
This output shows various process details, including PIDs, command names, and timestamps. However, due to the nature of process accounting, it might not directly pinpoint the culprit. Processes might have terminated after the deletion, making it challenging to definitively identify the responsible one. You can grep the PPID or PID we obtained from the audit log against the output of the dump-acct command, as shown after the sample output below.

sudo dump-acct /var/log/account/pacct | tail
grotty          |v3|     0.00|     0.00|     2.00|  1000|  1000| 12000.00|     0.00|  321103|  321101|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
groff           |v3|     0.00|     0.00|     2.00|  1000|  1000|  6096.00|     0.00|  321101|  321095|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
nroff           |v3|     0.00|     0.00|     4.00|  1000|  1000|  2608.00|     0.00|  321095|  321087|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
man             |v3|     0.00|     0.00|     4.00|  1000|  1000| 10160.00|     0.00|  321096|  321087| F   |       0|pts/1   |Fri Aug 14 13:26:07 2020
pager           |v3|     0.00|     0.00|  2018.00|  1000|  1000|  8440.00|     0.00|  321097|  321087|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
man             |v3|     2.00|     0.00|  2021.00|  1000|  1000| 10160.00|     0.00|  321087|  318116|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
clear           |v3|     0.00|     0.00|     0.00|  1000|  1000|  2692.00|     0.00|  321104|  318116|     |       0|pts/1   |Fri Aug 14 13:26:30 2020
dump-acct       |v3|     2.00|     0.00|     2.00|  1000|  1000|  4252.00|     0.00|  321105|  318116|     |       0|pts/1   |Fri Aug 14 13:26:35 2020
tail            |v3|     0.00|     0.00|     2.00|  1000|  1000|  8116.00|     0.00|  321106|  318116|     |       0|pts/1   |Fri Aug 14 13:26:35 2020
clear           |v3|     0.00|     0.00|     0.00|  1000|  1000|  2692.00|     0.00|  321107|  318116|     |       0|pts/1   |Fri Aug 14 13:26:45 2020
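
For example, to search the accounting data for the PID (2606) and PPID (2358) we noted from the audit log earlier:

sudo dump-acct /var/log/account/pacct | grep -E '2606|2358'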

To better understand what you’re looking at, you may want to add column headings as I have done with these commands:

echo "Command vers runtime systime elapsed UID GID mem_use chars PID PPID ? retcode term date/time" "
sudo dump-acct /var/log/account/pacct | tail -5

Command         vers  runtime   systime   elapsed    UID    GID   mem_use     chars      PID     PPID  ?   retcode   term     date/time
tail            |v3|     0.00|     0.00|     3.00|     0|     0|  8116.00|     0.00|  358190|  358188|     |       0|pts/1   |Sat Aug 15 11:30:05 2020
pacct           |v3|     0.00|     0.00|     3.00|     0|     0|  9624.00|     0.00|  358188|  358187|S    |       0|pts/1   |Sat Aug 15 11:30:05 2020
sudo            |v3|     0.00|     0.00|     4.00|     0|     0| 10984.00|     0.00|  358187|  354579|S    |       0|pts/1   |Sat Aug 15 11:30:05 2020
gmain           |v3|    14.00|     3.00|  1054.00|  1000|  1000|  1159680|     0.00|  358169|    3179|    X|       0|__      |Sat Aug 15 11:30:03 2020
vi              |v3|     0.00|     0.00|   456.00|  1000|  1000| 10976.00|     0.00|  358194|  354579|     |       0|pts/1   |Sat Aug 15 11:30:28 2020

Alternative: lastcomm (Limited Effectiveness)

In some cases, you can try lastcomm to potentially retrieve the command associated with the PID, even if the process has ended. However, its effectiveness depends on system configuration and might not always be reliable.
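
For example, to list recent invocations of the rm command recorded in the accounting file:

sudo lastcomm rm

Each output line shows the command name, the user and terminal it ran from, and when it ran, which you can correlate with the audit log timestamps.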

Important Note

While combining auditd with process accounting can provide insights, it’s crucial to understand the limitations. Process accounting data offers a broader picture of resource usage but doesn’t directly correlate to specific file deletions. Additionally, processes might terminate quickly, making it difficult to trace back to a specific action.

Best Practices

1. Regular Monitoring: Regularly monitor and analyze audit logs to stay ahead of potential security breaches.
2. Comprehensive Logging: Ensure comprehensive logging by setting appropriate audit rules and keeping process accounting enabled.
3. Timely Responses: Respond quickly to any suspicious activity by investigating audit logs and process accounting data promptly.

By combining the capabilities of auditd and process accounting, you can enhance your ability to track and understand file system activity, thereby strengthening your system’s security posture.

Demystifying Containers and Orchestration: A Beginner’s Guide

In today’s fast-paced world of software development, speed and efficiency are crucial. Containerization and container orchestration technologies are revolutionizing how we build, deploy, and manage applications. This blog post will break down these concepts for beginners, starting with the fundamentals of containers and then exploring container orchestration with a focus on Kubernetes, the industry leader.

1. What are Containers?

Imagine a shipping container. It’s a standardized unit that can hold various cargo and be easily transported across different modes of transportation (ships, trucks, trains). Similarly, a software container is a standardized unit of software that packages code and all its dependencies (libraries, runtime environment) into a lightweight, portable package.


Benefits of Containers:

  • Portability: Containers run consistently across different environments (physical machines, virtual machines, cloud platforms) due to their standardized nature.
  • Isolation: Each container runs in isolation, sharing resources with the operating system but not with other containers, promoting security and stability.
  • Lightweight: Containers are much smaller than virtual machines, allowing for faster startup times and efficient resource utilization.

    2. What is Docker?

    Docker is a free and open-source platform that provides developers with the tools to build, ship, and run applications in standardized units called containers. Think of Docker as a giant toolbox containing everything you need to construct and manage these containers.

    Here’s how Docker is involved in containerization:

  • Building Images: Docker allows you to create instructions (Dockerfile) defining the environment and dependencies needed for your application. These instructions are used to build lightweight, portable container images that encapsulate your code.
  • Running Containers: Once you have an image, Docker can run it as a container instance. This instance includes the application code, libraries, and runtime environment, all packaged together.
  • Sharing Images: Docker Hub, a public registry, allows you to share and discover container images built by others. This promotes code reuse and simplifies development.
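
    As a minimal sketch of this workflow, assuming you have a Dockerfile in the current directory and an application listening on port 80 inside the container (the image name my-app is just an example):

    # Build an image from the Dockerfile in the current directory
    docker build -t my-app .

    # Run the image as a container, mapping host port 8080 to container port 80
    docker run -d -p 8080:80 my-app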



    Benefits of Using Docker:

  • Faster Development: Docker simplifies the development process by ensuring a consistent environment across development, testing, and production.
  • Portability: Containerized applications run consistently on any system with Docker installed, regardless of the underlying operating system.
  • Efficiency: Containers are lightweight and share the host operating system kernel, leading to efficient resource utilization.

    3. What is Container Orchestration?
    As the number of containers in an application grows, managing them individually becomes cumbersome. Container orchestration tools automate the deployment, scaling, and management of containerized applications. They act as a conductor for your containerized orchestra.

    Key Features of Container Orchestration:

  • Scheduling: Orchestrators like Kubernetes determine where to run containers across available resources.
  • Scaling: They can automatically scale applications up or down based on demand.
  • Load Balancing: Orchestrators distribute incoming traffic across multiple container instances for an application, ensuring stability and high availability.
  • Health Monitoring: They monitor the health of containers and can restart them if they fail.

    4. What is Kubernetes?

    Kubernetes, often shortened to K8s, is an open-source system for automating container deployment, scaling, and management. It’s the most popular container orchestration platform globally due to its scalability, flexibility, and vibrant community.

    Thinking of Kubernetes as a City:

    Imagine Kubernetes as a city that manages tiny houses (containers) where different microservices reside. Kubernetes takes care of:

  • Zoning: Deciding where to place each tiny house (container) based on resource needs.
  • Traffic Management: Routing requests to the appropriate houses (containers).
  • Utilities: Providing shared resources (like storage) for the houses (containers).
  • Maintenance: Ensuring the houses (containers) are healthy and restarting them if needed.

    Example with a Simple Web App:

    Let’s say you have a simple web application with a front-end written in Node.js and a back-end written in Python (commonly used for web development). You can containerize each component (front-end and back-end) and deploy them on Kubernetes. Kubernetes will manage the deployment, scaling, and communication between these containers.
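
    As a rough sketch of what that could look like with the kubectl command-line tool (the image names here are hypothetical placeholders):

    # Deploy the two components as separate deployments
    kubectl create deployment frontend --image=myregistry/frontend:latest
    kubectl create deployment backend --image=myregistry/backend:latest

    # Expose the front-end to outside traffic and scale it to three replicas
    kubectl expose deployment frontend --type=LoadBalancer --port=80
    kubectl scale deployment frontend --replicas=3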

    Benefits of Kubernetes:

  • Scalability: Easily scale applications up or down to meet changing demands.
  • Portability: Deploy applications across different environments (on-premise, cloud) with minimal changes.
  • High Availability: Kubernetes ensures your application remains available even if individual containers fail.
  • Rich Ecosystem: A vast ecosystem of tools and integrations exists for Kubernetes.

    5. How Docker Relates to Container Orchestration and Kubernetes
    Docker focuses on building, sharing, and running individual containers. While Docker can be used to manage a small number of containers, container orchestration tools like Kubernetes become essential when you have a complex application with many containers that need to be deployed, scaled, and managed efficiently.

    Think of Docker as the tool that builds the tiny houses (containers), and Kubernetes as the city planner and manager that oversees their placement, operations, and overall well-being.

    Getting Started with Docker and Kubernetes:
    There are several resources available to get started with Docker and Kubernetes:

    Docker: https://docs.docker.com/guides/getting-started/ offers tutorials and documentation for beginners.
    Kubernetes: https://kubernetes.io/docs/home/ provides comprehensive documentation and getting started guides.
    Online Courses: Many platforms like Udemy and Coursera offer beginner-friendly courses on Docker and Kubernetes.

    Conclusion

    Containers and container orchestration offer a powerful approach to building, deploying, and managing applications. By understanding Docker, containers, and orchestration tools like Kubernetes, you can streamline how you build, ship, and scale your applications.

  • Securing Your Connections: A Guide to SSH Key authentication


    SSH (Secure Shell) is a fundamental tool for securely connecting to remote servers. While traditional password authentication works, it can be vulnerable to brute-force attacks. SSH keys offer a more robust and convenient solution for secure access.
    SSH authentication using SSH keys

    This blog post will guide you through the world of SSH keys, explaining their types, generation process, and how to manage them for secure remote connections and how to configure SSH key authentication.

    Understanding SSH Keys: An Analogy
    Imagine your home has two locks:

  • Combination Lock (Password): Anyone can access your home if they guess the correct combination.
  • High-Security Lock (SSH Key): Only someone with a specific physical key (your private key) can unlock the door.

    Similarly, SSH keys work in pairs:

  • Private Key: A securely stored key on your local machine. You never share this.
  • Public Key: A unique identifier you share with the server you want to access.
    The server verifies the public key against your private key when you attempt to connect. This verification ensures only authorized users with the matching private key can access the server.

    Types of SSH Keys
    There are several types of SSH keys; here we discuss the two main ones:

    RSA (Rivest–Shamir–Adleman): The traditional and widely supported option. It offers a good balance of security and performance.
    Ed25519 (Edwards-curve Digital Signature Algorithm): A newer, faster, and potentially more secure option gaining popularity.

    RSA vs. Ed25519 Keys:

  • Security: Both are considered secure, but Ed25519 might offer slightly better theoretical resistance against certain attacks.
  • Performance: Ed25519 is generally faster for both key generation and signing/verification compared to RSA. This can be beneficial for slower connections or resource-constrained devices.
  • Key Size: RSA keys are typically 2048 or 4096 bits, while Ed25519 keys are 256 bits. Despite the smaller size, Ed25519 offers comparable security due to the underlying mathematical concepts.
  • Compatibility: RSA is widely supported by all SSH servers. Ed25519 is gaining popularity but might not be universally supported on older servers.

    Choosing Between RSA and Ed25519:

    For most users, Ed25519 is a great choice due to its speed and security. However, if compatibility with older servers is a critical concern, RSA remains a reliable option.

    Generating SSH Keys with ssh-keygen
    Here’s how to generate your SSH key pair using the ssh-keygen command:

    Open your terminal.

    Run the following command, replacing the placeholders as described below:

    ssh-keygen -t <key_type> -b 4096 -C "<your_email@example.com>"
  • <key_type>: Choose either rsa or ed25519.
  • -b 4096: Specifies the key size for RSA keys (4096 bits is recommended for strong security). Ed25519 keys have a fixed length, so this option is ignored for them.
  • -C "<your_email@example.com>": Adds a comment to your key (optional).

    You’ll be prompted to enter a secure passphrase for your private key. Choose a strong passphrase and remember it well (it’s not mandatory, but highly recommended for added security).

    The command will generate two files:

    <key_name>.pub: The public key file (you’ll add this to the server).
    <key_name>: The private key file (keep this secure on your local machine).

    Important Note: Never share your private key with anyone!

    Adding Your Public Key to the Server’s authorized_keys File

    1. Access the remote server you want to connect to (through a different method if you haven’t set up key-based authentication yet).
    2. Locate the ~/.ssh/authorized_keys file on the server (the ~ represents your home directory). You might need to create the .ssh directory if it doesn’t exist.
    3. Open the authorized_keys file with a text editor.
    4. Paste the contents of your public key file (.pub) into the authorized_keys file on the server.
    5. Save the authorized_keys file on the server.
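
    Alternatively, the ssh-copy-id utility that ships with OpenSSH automates all of the steps above; this one-liner assumes your public key is at the path shown:

    ssh-copy-id -i ~/.ssh/<key_name>.pub <username>@<server_address>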

    Permissions:

    Ensure the authorized_keys file has permissions set to 600 (read and write access only for the owner) and that the ~/.ssh directory is set to 700.

    Connecting with SSH Keys
    Once you’ve added your public key to the server, you can connect using your private key:

    ssh <username>@<server_address>

    You’ll be prompted for your private key passphrase (if you set one) during the connection. That’s it! You’re now securely connected to the server without needing a password.
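
    If you saved the key pair under a non-default name, point ssh at the private key explicitly with the -i option:

    ssh -i ~/.ssh/<key_name> <username>@<server_address>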

    Benefits of SSH Keys:

  • Enhanced Security: More secure than password authentication, making brute-force attacks ineffective.
  • Convenience: No need to remember complex passwords for multiple servers.
  • Faster Logins: SSH key-based authentication is often faster than password authentication.

    By implementing SSH keys, you can significantly improve the security and convenience of your remote server connections. Remember to choose strong passwords and keep your private key secure for optimal protection.

  • Install the free SSL Certificate on the server’s hostname – cPanel WHM server


    cPanel and WHM (WebHost Manager) are popular web hosting control panels that allow server administrators to manage web hosting services efficiently. Among their many features, cPanel offers a handy tool called AutoSSL, which provides free SSL certificates for added security. In this guide, I will show you how to use AutoSSL to secure your server’s hostname.

    Step 1: The checkallsslcerts Script

    The checkallsslcerts script is used by cPanel to issue SSL certificates for the server hostname. It’s important to note that checkallsslcerts runs as part of the nightly update checks performed on your system. These updates include cPanel’s own update script, upcp (the cPanel update script).

    Step 2: When to Manually Run AutoSSL

    In most cases, checkallsslcerts will take care of securing your server’s hostname during the nightly updates. However, there may be instances when you want to update the SSL certificate manually. This is especially useful if you’ve recently changed your server’s hostname and want to ensure the SSL certificate is updated immediately.

    Step 3: Understanding the checkallsslcerts Script

    The `/usr/local/cpanel/bin/checkallsslcerts` script is responsible for checking and installing SSL certificates for your server’s hostname. Here’s what the script does:

    – It creates a Domain Control Validation (DCV) file.
    – It performs a DNS lookup for your hostname’s IP address.
    – It checks the DCV file using HTTP validation (for cPanel & WHM servers).
    – If needed, it sends a request to Sectigo to issue a new SSL certificate.
    – It logs the Sectigo requests for validation.

    You can learn more about the checkallsslcerts script and its usage in cPanel’s documentation.

    Step 4: How to Manually Execute the Script

    To manually run the script, use the following command:

    /usr/local/cpanel/bin/checkallsslcerts [options]

    You can use options like `--allow-retry` and `--verbose` as needed.
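
    For example, to run it with verbose output and allow it to retry a failed certificate request:

    /usr/local/cpanel/bin/checkallsslcerts --allow-retry --verbose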

    Step 5: Troubleshooting and Tips

    If you encounter issues with the SSL certificate installation, the script will provide helpful output to troubleshoot the problem. Ensure that your server’s firewall allows access from Sectigo’s IP addresses mentioned in the guide.

    Common Issue: Unable to obtain a free hostname certificate due to 404 when DCV check runs in /usr/local/cpanel/bin/checkallsslcerts

    After running the /usr/local/cpanel/bin/checkallsslcerts script via SSH, you may see errors similar to the following:

    FAILED: Cpanel::Exception/(XID bj6m2k) The system queried for a temporary file at "http://hostname.domain.tld/.well-known/pki-validation/B65E7F11E8FBB1F598817B68746BCDDC.txt", but the web server responded with the following error: 404 (Not Found). A DNS (Domain Name System) or web server misconfiguration may exist.
    [WARN] The system failed to acquire a signed certificate from the cPanel Store because of the following error: Neither HTTP nor DNS DCV preflight checks succeeded!

    Description:
    Encountering errors like “404 Not Found” during the DCV check when running /usr/local/cpanel/bin/checkallsslcerts via SSH? This issue typically arises when the shared IP address doesn’t match the main IP. To resolve it, ensure both IPs match and that the A record for the server’s hostname points to the main/shared IP. Here’s a workaround:

    Workaround:

    1. Confirm that the main IP and shared IP are identical.
    2. Make sure the A record for the server’s hostname points to the main/shared IP.
    3. To change the shared IP:
    Log in to WHM as the ‘root’ user.

  • Navigate to “Home » Server Configuration » Basic WebHost Manager® Setup.”
  • Update “The IPv4 address (only one address) to use to set up shared IPv4 virtual hosts” to match the main IP.
  • Click “Save Changes” and then execute the following via SSH or Terminal in WHM:
    /scripts/rebuildhttpdconf
    /scripts/restartsrv_httpd --hard

    This will help resolve issues with obtaining a free hostname certificate in cPanel/WHM.

    Conclusion

    Securing your cPanel/WHM server’s hostname with a free SSL certificate from AutoSSL is essential for a secure web hosting environment. By following these steps, you can ensure that your server’s hostname is protected with a valid SSL certificate.

    Remember to regularly check your SSL certificates to ensure they remain up-to-date and secure.

  • How to Install nopCommerce on Ubuntu Linux with Nginx Reverse Proxy and SSL: Step-by-Step Guide

    nopCommerce is an open-source e-commerce platform that allows users to create and manage their online stores. It is built on the ASP.NET Core framework and supports multiple database systems, including MySQL, Microsoft SQL Server, and PostgreSQL, as its backend. The platform is highly customizable and offers a wide range of features, including product management, order processing, shipping, payment integration, and customer management. nopCommerce is a popular choice for businesses of all sizes because of its flexibility, scalability, and user-friendly interface.
    In this tutorial, we will guide you through the process of installing nopCommerce on Ubuntu Linux with Nginx reverse proxy and SSL.

    Register Microsoft key and feed
    To register the Microsoft key and feed, launch the terminal and execute these commands:

    1. Download the packages-microsoft-prod.deb file by running the command:

    wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb

    2. Install the packages-microsoft-prod.deb package by running the command:

    sudo dpkg -i packages-microsoft-prod.deb

    Install the .NET Core Runtime
    To install the .NET Core Runtime, perform the following steps:

    1. Update the available product listings for installation by running the command:

    sudo apt-get update

    2. Install the .NET runtime by running the command:

    sudo apt-get install -y apt-transport-https aspnetcore-runtime-7.0

    To determine the appropriate version of the .NET runtime to install, you should refer to the documentation provided by nopCommerce, which takes into account both the version of nopCommerce you are using and the Ubuntu OS version. Refer to the link below:

    https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu
    https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#supported-distributions

    3. Verify the installed .Net Core runtimes by running the command:

    dotnet --list-runtimes


    4. Install the libgdiplus library:

    sudo apt-get install libgdiplus

    libgdiplus is an open-source implementation of the GDI+ API that provides access to graphic-related functions in nopCommerce and is required for running nopCommerce on Linux.

    Install MySql Server
    Recent nopCommerce releases support the latest MySQL and MariaDB versions. We will install MariaDB 10.6.

    1. To install mariadb-server for nopCommerce, execute the following command in the terminal:

    sudo apt-get install mariadb-server

    2. After installing MariaDB Server, you need to set the root password. Execute the following command in the terminal to set the root password:

    sudo /usr/bin/mysql_secure_installation

    This will start a prompt to guide you through the process of securing your MySQL installation and setting the root password.

    3. Create a database and User. We will use these details while installing nopCommerce. Replace the names of the database and the database user accordingly.

    mysql -u root -p
    create database nopCommerceDB;
    grant all on nopCommerceDB.* to nopCommerceuser@localhost identified by 'P@ssW0rD';

    Please replace the database name, username and password accordingly.

    4. Reload privilege tables and exit the database.

    flush privileges;
    quit;
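
    You can verify the new credentials work by logging in to the database as the new user:

    mysql -u nopCommerceuser -p nopCommerceDB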

    Install nginx

    1. To install Nginx, run the following command:

    sudo apt-get install nginx

    2. After installing Nginx, start the service by running:

    sudo systemctl start nginx

    3. You can verify the status of the service using the following command:

    sudo systemctl status nginx


    4. Nginx Reverse proxy configuration
    To configure Nginx as a reverse proxy for your nopCommerce application, create an Nginx server block configuration file at /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in. Open the file in a text editor and add the following contents:

    server {
        server_name nopcommerce.linuxwebhostingsupport.in;

        listen 80;
        listen [::]:80;

        location / {
            proxy_pass         http://localhost:5000;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection keep-alive;
            proxy_set_header   Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }
    }

    You need to replace nopcommerce.linuxwebhostingsupport.in with your own domain name.
    5. Enable the virtual host configuration file:
    Enable the server block by creating a symbolic link in the /etc/nginx/sites-enabled directory:
    sudo ln -s /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in /etc/nginx/sites-enabled/
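
    Before reloading, you can validate the Nginx configuration syntax:

    sudo nginx -t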

    6. Reload Nginx for the changes to take effect:

    sudo systemctl reload nginx

    Install NopCommerce

    In this example, we’ll use /var/www/nopCommerce for storing the files.

    1. Create a directory:

    sudo mkdir /var/www/nopCommerce

    2. Navigate to the directory where you want to store the nopCommerce files, Download and unpack nopCommerce:

    cd /var/www/nopCommerce
    sudo wget https://github.com/nopSolutions/nopCommerce/releases/download/release-4.60.2/nopCommerce_4.60.2_NoSource_linux_x64.zip
    sudo apt-get install unzip
    sudo unzip nopCommerce_4.60.2_NoSource_linux_x64.zip

    3. Create two directories that nopCommerce needs to run properly:

    sudo mkdir bin
    sudo mkdir logs

    4. Change the ownership of the nopCommerce directory and its contents to the www-data group:

    sudo chown -R www-data:www-data /var/www/nopCommerce/

    www-data is the user the Nginx web server runs as.

    Create the nopCommerce service

    1. Create a file named nopCommerce.service in the /etc/systemd/system directory with the following content:

    [Unit]
    Description=nopCommerce app running on Ubuntu
    
    [Service]
    WorkingDirectory=/var/www/nopCommerce
    ExecStart=/usr/bin/dotnet /var/www/nopCommerce/Nop.Web.dll
    Restart=always
    # Restart service after 10 seconds if the dotnet service crashes:
    RestartSec=10
    KillSignal=SIGINT
    SyslogIdentifier=nopCommerce-example
    User=www-data
    Environment=ASPNETCORE_ENVIRONMENT=Production
    Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
    
    [Install]
    WantedBy=multi-user.target

    2. Start the nopCommerce service by running:

    sudo systemctl start nopCommerce.service
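
    To have the service start automatically at boot, enable it as well:

    sudo systemctl enable nopCommerce.service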

    3. To check the status of the nopCommerce service, use the following command:

    sudo systemctl status nopCommerce.service

    Also, check if the service is running on port 5000

    sudo lsof -i:5000
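
    You can also send a test request directly to the application to confirm it responds:

    curl -I http://localhost:5000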

    4. After that, restart the nginx server:

    sudo systemctl restart nginx

    Now that the prerequisites are installed and configured, you can proceed to install and set up your nopCommerce store.

    Install nopCommerce
    After completing the previous steps, you can access the website through the following URL: http://nopcommerce.linuxwebhostingsupport.in. Upon visiting the site for the first time, you will be automatically redirected to the installation page.

    Provide the following information in the Store Information panel:

  • Admin user email: This is the email address of the first administrator for the website.
  • Admin user password: You must create a password for the administrator account.
  • Confirm password: Confirm the admin user password.
  • Country: Choose your country from the dropdown list. By selecting a country, you can configure your store with preinstalled language packs, preconfigured settings, shipping details, VAT settings, currencies, measures, and more.
  • Create sample data: Check this box if you want sample products to be created. It is recommended so that you can start working with your website before adding your own products. You can always delete or unpublish these items later.

    In the Database Information panel, you will need to provide the following details:

  • Database: Select either Microsoft SQL Server, MySQL, or PostgreSQL. Since we are installing nopCommerce on Linux with MariaDB, choose MySQL.
  • Create database if it doesn’t exist: We recommend creating your database and database user ahead of time to ensure a successful installation. Simply create a database instance and add the database user to it. The installation process will create all the tables, stored procedures, and more. Uncheck this option, since we will use the database and database user we created earlier.
  • Enter raw connection string (advanced): Select this option if you prefer to enter a Connection string instead of filling the connection fields. For now, leave this unchecked
  • Server name: This is the IP, URL, or server name of your database. Use “localhost”.
  • Database name: This is the name of the database used by nopCommerce. Use the database we created earlier.
  • Use integrated Windows authentication: Leave it unchecked
  • SQL Username: Enter your database user name we created earlier.
  • SQL Password: Use your database user password we used earlier.
  • Specify custom collation: Leave this advanced setting empty.

    Click on the Install button to initiate the installation process. Once the installation is complete, the home page of your new site will be displayed. Access your site from the following URL: http://nopcommerce.linuxwebhostingsupport.in.


    Note:
    You can reset a nopCommerce website to its default settings by deleting the appsettings.json file located in the App_Data folder.

    Adding SSL and Securing nopCommerce
    We will be using Let’s Encrypt to add a free SSL certificate.
    Let’s Encrypt is a free, automated, and open certificate authority that allows you to obtain SSL/TLS certificates for your website. Certbot is a command-line tool that automates the process of obtaining and renewing these certificates, making it easier to secure your website with HTTPS.

    Here are the steps to install SSL with Certbot Nginx plugins:

    1. Install Certbot: First, make sure you have Certbot installed on your server. You can do this by running the following command:

    sudo apt-get update
    sudo apt-get install certbot python3-certbot-nginx

    2. Obtain SSL Certificate: Next, you need to obtain an SSL certificate for your domain. You can do this by running the following command:

    sudo certbot --nginx -d yourdomain.com

    Replace yourdomain.com with your own domain name. This command will automatically configure Nginx to use SSL, obtain a Let’s Encrypt SSL certificate and set an automatic redirect from http to https.

    3. Verify SSL Certificate: Once the certificate is installed, you can verify it by visiting your website using the https protocol. If the SSL certificate is valid, you should see a padlock icon in your browser’s address bar.

    4. Automatic Renewal: Certbot SSL certificates are valid for 90 days. To automatically renew your SSL certificate before it expires, you can set up a cron job to run the following command:

    sudo certbot renew --quiet

    This will check if your SSL certificate is due for renewal and automatically renew it if necessary.
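
    For example, a root crontab entry (added via sudo crontab -e) that attempts renewal every day at 3 a.m. might look like this; the schedule is just a suggestion:

    0 3 * * * /usr/bin/certbot renew --quiet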

    5. nopCommerce also recommends setting the “UseProxy” option to true in the appsettings.json file located in the App_Data folder when the site is served over SSL, so change this value too.


    nopCommerce is a popular open-source e-commerce platform that offers users a flexible and scalable solution for creating and managing online stores. In this tutorial, we provided a step-by-step guide for installing and configuring nopCommerce on Ubuntu Linux with Nginx reverse proxy and SSL. We covered the installation of Microsoft key and feed, .NET Core Runtime, MySQL server, and Nginx reverse proxy. We also discussed how to configure Nginx as a reverse proxy for the nopCommerce application. By following this tutorial, you can set up a secure and reliable nopCommerce e-commerce store on Ubuntu Linux.

  • How to Share Files Between a Hyper-V Windows Host and Ubuntu guest VM

    Introduction

    Sharing files between a Windows host and Ubuntu virtual machine (VM) can be essential when you need to transfer data or collaborate between different environments. While the Hyper-V virtualization platform makes it easy to create Ubuntu VMs on a Windows host, the process of sharing files between the two systems can be a bit more complex. This is because Windows and Ubuntu use different file systems and protocols to access shared resources.

    In this blog post, we will walk you through the steps required to share files between a Hyper-V Windows host and Ubuntu VM using the Common Internet File System (CIFS) protocol. This method allows you to mount a Windows shared folder on Ubuntu, giving you access to files on the Windows host as if they were on the Ubuntu machine itself. We will also cover the process of setting up a new Windows local user for authentication, creating a shared folder, and enabling network settings in Hyper-V. By the end of this guide, you will have a fully functional file sharing system that works seamlessly between Windows and Ubuntu.

    1. Create a new Windows local user for sharing and authentication
    To access a Windows shared folder from Ubuntu, you need to provide valid credentials for a user account that has permissions to access the shared folder. For security reasons, it’s not recommended to use your Windows user account for this purpose, as it could potentially expose your system to security risks. Instead, it’s best to create a new local user account that’s dedicated solely to file sharing.

    Step-by-step guide for creating a new user in Windows
    1. Press “Windows Key + R” on your keyboard to open the Run dialog box.
    2. Type “netplwiz” in the box and click on “OK.”
    3. In the User Accounts window that appears, click on the “Add” button.

    4. Select “Sign in without a Microsoft account (not recommended)” at the bottom of the screen.
    5. Click on “Local account” and then click on “Next.”
    6. Enter a username and password for the new user and then click on “Next.”
    7. You can choose whether to set a password hint for the new user account or not. Click on “Next” to proceed.
    8. Click on “Finish” to complete the process.

    I have created a user called “shareuser” with the password 123456. Please always use a stronger password; mine is just a test environment.

    2. Create a Windows folder and enable sharing
    In this step, we will create a new folder in Windows and enable sharing so that it can be accessed from our Ubuntu VM.

    1. Open File Explorer and navigate to the location where you want to create a new folder.
    2. Right-click on the empty space and select “New” > “Folder”.
    3. Name the folder and press “Enter” on your keyboard.
    4. Right-click on the newly created folder and select “Properties”.
    5. In the Properties window, click on the “Sharing” tab.
    6. Click on the “Share” button.
    7. In the “Choose People to Share With” window, enter the name of the user you created earlier (e.g. “shareuser”).

    8. Click on “Add” and then click on the “Share” button.
    9. The folder should now be shared with the user you specified.

    Note: If you don’t see the “Sharing” tab in the folder properties window, you may need to enable file and printer sharing in Windows by going to “Control Panel” > “Network and Sharing Center” > “Change advanced sharing settings” and selecting “Turn on file and printer sharing”. And I will be replacing the hostname “WAHAB” with an IP address in later stages.

    Once you have shared the folder with the user, you can access it from your Ubuntu VM using the SMB protocol.

    3. Enable default or external type network for VMs in Hyper-V
    By default, virtual machines are connected to the “Default Switch”. For the host and guest VM to communicate, you need to use either this default switch or an External type virtual switch. If a Private type switch is used, the Windows host will not be able to communicate with or transfer files to the guest VMs.

    1. Open the Hyper-V Manager on the Windows host machine.
    2. Select the virtual machine you want to connect to the network.
    3. In the right-hand pane, click on “Settings”.
    4. Click on “Network Adapter” and select “Virtual Switch” as the connection type.
    5. Select either the “Default Switch” or an “External” virtual switch that you have previously created.
    6. Click “OK” to save changes.
    7. Start the virtual machine.

    4. Find the private IP of Windows HyperV host
    The private IP of the Windows host is needed to establish a connection between the Windows host and the Ubuntu VM. In order for the Ubuntu VM to access files on the Windows host, it needs to know the private IP address of the host so that it can connect to it over the network.

    1. Open the Command Prompt on the Windows host machine by pressing the Windows key + R and then typing “cmd” in the Run dialog box.
    2. In the Command Prompt, type “ipconfig” and press Enter.
    3. Look for the IPv4 Address entry. The number listed next to this entry is the private IP address of the Windows host.
    Note: The private IP address is usually in the format of “192.168.x.x” or in the range “172.16.x.x” through “172.31.x.x”.

    5. Check folder shared is accessible using smbclient from Ubuntu VM

    smbclient is a command-line tool used to connect to Windows and Samba file servers. It allows us to browse and manipulate files and directories on remote servers using the Server Message Block (SMB) protocol.

    In this step, we will use smbclient to verify if the shared folder is accessible from the Ubuntu VM.

    Step-by-step guide for checking if the share is accessible using smbclient:

    1. Open the terminal on Ubuntu VM.
    2. Install smbclient if it’s not installed using the following command:

    sudo apt-get install smbclient

    3. Connect to the shared folder using the following command:

    smbclient -U shareuser //172.30.96.1/FolderToShare

    Note: Replace “shareuser” with the username of the Windows local user you created, “FolderToShare” with the name of the shared folder you created, and “172.30.96.1” with the Windows host IP address you found in step 4.
    4. Enter the password for the Windows local user when prompted.
    5. If the connection is successful, you should see the smbclient prompt, which looks like this: smb: \>
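
    If you are unsure of the share name, you can first list all shares exported by the Windows host:

    smbclient -L //172.30.96.1 -U shareuser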

    6. Mount the Windows share using cifs
    CIFS (Common Internet File System) is a network protocol that allows Linux systems to access and share files and directories with Windows operating systems. It is needed to mount the Windows share on the Ubuntu VM so that the Ubuntu user can access the shared files and directories.

    Here are the step-by-step instructions for mounting the Windows share using cifs:

    Create a directory where you want to mount the Windows share. For example, let’s create a directory called “windows_share” under the home directory:

    mkdir ~/windows_share

    Install the cifs-utils package if it’s not already installed on the Ubuntu VM:

    sudo apt-get update
    sudo apt-get install cifs-utils

    Create a file and add the Windows user credentials in it.

    nano /home/wahab/.smbcredentials
    
    username=shareuser
    password=123456

    Replace “shareuser” with the username you created for file sharing on your Windows host.
    Replace “123456” with the password you set for the user.
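
    Since this file stores the password in plain text, restrict it so that only your user can read it:

    chmod 600 /home/wahab/.smbcredentials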

    Open the “/etc/fstab” file using a text editor with root privileges, such as nano:

    sudo nano /etc/fstab

    Add the following line at the end of the file:

    //172.30.96.1/FolderToShare /home/wahab/windows_share cifs credentials=/home/wahab/.smbcredentials,uid=wahab,gid=wahab 0 0

    Replace “172.30.96.1” with your Windows host IP address.
    Replace “FolderToShare” with the name of the shared folder on your Windows host.
    “/home/wahab/windows_share” will be the folder you mounting your Windows share. So you may choose different one as per your need.
    The “uid” and “gid” options set the ownership of the mounted directory to the Ubuntu user “wahab”, replace them with yours.
    The “0 0” fields tell the system not to include this filesystem in dump backups and not to run a filesystem check (fsck) on it at boot.
    Save and close the file.

    Here’s how to mount and unmount the Windows share as the Ubuntu user “wahab”:

    To mount the share:

    sudo mount ~/windows_share

    Check if the Windows share is mounted by running a “df -h” command

    To unmount the share:

    sudo umount ~/windows_share

    In conclusion, sharing files between a Windows host and Ubuntu VM can be accomplished through the use of the Common Internet File System (CIFS) protocol. The process involves creating a new Windows local user for sharing and authentication, creating a Windows folder and enabling sharing, and configuring the network settings in Hyper-V. Once these steps are completed, you can easily access the shared folder from your Ubuntu VM as if it were on the local machine. It’s important to ensure that you follow security best practices by using a dedicated user account for file sharing and setting a strong password. With these steps, you can establish a seamless and secure file sharing system between Windows and Ubuntu.

    Installing PHP GEOS module on a RunCloud Server

    PHP GEOS is a PHP extension for geographic objects support, while RunCloud is a cloud server control panel designed for PHP applications. With PHP GEOS module installed on RunCloud, PHP applications can take advantage of geographic data and use the GEOS (Geometry Engine – Open Source) library to perform spatial operations.

    In this blog post, I will show you how to install the PHP GEOS module on a RunCloud server.

    Steps
    1. Install the required development tools

    Before installing the PHP GEOS module, make sure that the required development tools are installed on your Ubuntu server. You can install them by running the following command:

    apt-get install autoconf

    2. Install GEOS library
    Next, download, compile, and install the GEOS (Geometry Engine – Open Source) library:

    wget http://download.osgeo.org/geos/geos-3.9.4.tar.bz2
    tar xvf geos-3.9.4.tar.bz2
    cd geos-3.9.4/
    ./configure
    make
    make install
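
    If the library was installed under /usr/local/lib (the default prefix), refresh the dynamic linker cache so the PHP module can find it:

    ldconfig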

    3. Install PHP GEOS module

    Now, it’s time to install the PHP GEOS module. Follow the steps below to install it for PHP 8.2:

    # Set module name
    MODULE_NAME="geos"

    # Download the latest module files

    git clone https://git.osgeo.org/gitea/geos/php-geos.git
    mv php-geos/ php-geos_PHP82
    cd php-geos_PHP82

    # make clean will always fail if you never compile it before
    make clean
    /RunCloud/Packages/php82rc/bin/phpize --clean
    /RunCloud/Packages/php82rc/bin/phpize
    ./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config
    make && make install

    This will install geos.so in the correct PHP extension directory.

    4. Add the module to PHP.ini file
    echo "extension=$MODULE_NAME.so" > /etc/php82rc/conf.d/$MODULE_NAME.ini

    And finally restart the PHP FPM service
    systemctl restart php82rc-fpm

    It’s important to note that the above steps are specific to PHP 8.2. If you wish to install the module for a different version, you will need to modify the commands accordingly. For instance, you can replace PHP 8.2 with 8.1 with below changes:
  • Replace /RunCloud/Packages/php82rc/bin/phpize with /RunCloud/Packages/php81rc/bin/phpize.
  • Replace ./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config with ./configure --with-php-config=/RunCloud/Packages/php81rc/bin/php-config.
  • Replace /etc/php82rc/conf.d/$MODULE_NAME.ini with /etc/php81rc/conf.d/$MODULE_NAME.ini.
  • Replace systemctl restart php82rc-fpm with systemctl restart php81rc-fpm.

    You can contact me if you need help with installing any custom modules on RunCloud control panel.

    Downgrading PHP Version on Bitnami WordPress in AWS Lightsail instance

    Hi all

    Recently, I helped one of my clients who was using an Amazon Lightsail WordPress instance provided by Bitnami. Bitnami is advantageous in that it provides a fully working stack, so you don’t have to worry about configuring LAMP or environments. You can find more information about the Bitnami Lightsail stack here.

    However, the client’s stack was using the latest PHP 8.x version, while the WordPress site he runs uses several plugins that need PHP 7.4. I advised the client to consider upgrading the website to support the latest PHP versions. However, since that would require a lot of work, and he wanted the site to be up and running, he decided to downgrade PHP.

    The issue with downgrading or upgrading PHP on a Bitnami stack is that it’s not possible. Bitnami recommends launching a new server instance with the required PHP, MySQL, or Apache version and migrating the data over. So, I decided to do it manually.

    Here are the server details:

    Debian 11
    Current installed PHP: 8.1.x

    Upgrading or downgrading PHP versions on a Bitnami stack is essentially the same as on a normal Linux server. In short, you need to:

    1. Ensure the PHP packages for the version you want are installed.
    2. Update any configuration for that PHP version.
    3. Update your web server configuration to point to the correct PHP version.
    4. Point the PHP CLI to the correct PHP version.
    5. Restart your web server and php-fpm.

    What we did was install the PHP version provided by the OS. Then, we updated php.ini to use the non-default MySQL socket location used by the Bitnami server. We created a php-fpm pool that runs as the “daemon” user. After that, we updated the Apache configuration to use the new PHP version.

    1. Make sure packages for your target version of PHP are installed
    To make sure that the correct packages are available on your system for the PHP version you want, first make sure your system is up to date by running these commands:

    sudo apt update
    sudo apt upgrade
    If it prompts you to do anything with config files, usually, you should just go with the default option and leave the current config as-is. Then, install the packages you need. For example, you can use the following command to install common PHP packages and modules:
    sudo apt install -y php7.4-cli php7.4-dev php7.4-pgsql php7.4-sqlite3 php7.4-gd php7.4-curl php7.4-memcached php7.4-imap php7.4-mysql php7.4-mbstring php7.4-xml php7.4-imagick php7.4-zip php7.4-bcmath php7.4-soap php7.4-intl php7.4-readline php7.4-common php7.4-pspell php7.4-tidy php7.4-xmlrpc php7.4-xsl php7.4-fpm

    2. Make sure PHP configuration for your target version is updated
    Find the mysql socket path used by your Bitnami stack by running this command:

    # ps aux | grep --color mysql.sock
    mysql 7700 1.1 2.0 7179080 675928 ? Sl Mar21 11:21 /opt/bitnami/mariadb/sbin/mysqld --defaults-file=/opt/bitnami/mariadb/conf/my.cnf --basedir=/opt/bitnami/mariadb --datadir=/bitnami/mariadb/data --socket=/opt/bitnami/mariadb/tmp/mysql.sock --pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid

    Edit php.ini file

    vi /etc/php/7.4/fpm/php.ini

    Find

    [Pdo_mysql]
    ; Default socket name for local MySQL connects. If empty, uses the built-in
    ; MySQL defaults.
    pdo_mysql.default_socket=

    Replace with

    [Pdo_mysql]
    ; Default socket name for local MySQL connects. If empty, uses the built-in
    ; MySQL defaults.
    pdo_mysql.default_socket= "/opt/bitnami/mariadb/tmp/mysql.sock"

    Find

    mysqli.default_socket =

    Replace with

    mysqli.default_socket = "/opt/bitnami/mariadb/tmp/mysql.sock"

    Create a php-fpm pool file

    vi /etc/php/7.4/fpm/pool.d/wp.conf

    [wordpress]
    env[PATH] = $PATH
    listen=/opt/bitnami/php/var/run/www2.sock
    user=daemon
    group=daemon
    listen.owner=daemon
    listen.group=daemon
    pm=dynamic
    pm.max_children=400
    pm.start_servers=260
    pm.min_spare_servers=260
    pm.max_spare_servers=300
    pm.max_requests=5000

    Feel free to adjust the PHP FPM settings to match your server specifications or needs. Check out this informative article for more tips on optimizing PHP FPM performance. Just keep in mind that Bitnami configures their stack with the listen.owner and listen.group settings set to daemon.

    This pool will listen on unix socket “/opt/bitnami/php/var/run/www2.sock”.

    Enable and restart the PHP 7.4 FPM service:

    systemctl enable php7.4-fpm
    systemctl restart php7.4-fpm
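
    You can confirm the new pool is running and listening on its Unix socket:

    ls -l /opt/bitnami/php/var/run/www2.sock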

    3. Update your web server configuration to point to the correct PHP version

    Edit file

    vi /opt/bitnami/apache2/conf/bitnami/php-fpm.conf

    For some installations, file is located at

    vi /opt/bitnami/apache2/conf/php-fpm-apache.conf

    Inside the file, find the line:

    SetHandler "proxy:fcgi://www-fpm"

    Then find and replace www.sock with www2.sock (the Unix socket our new PHP-FPM pool listens on).

    4. Make sure PHP-CLI points to the right PHP version

    Rename the default PHP binary installed by Bitnami:

    mv /opt/bitnami/php/bin/php /opt/bitnami/php/bin/php_8.1_bitnami

    Create a symlink to the newly installed PHP 7.4:

    ln -s /usr/bin/php7.4 /opt/bitnami/php/bin/php

    Test the installed version by running the command below:
    ~# php -v
    PHP 7.4.33 (cli) (built: Feb 22 2023 20:07:47) ( NTS )
    Copyright (c) The PHP Group
    Zend Engine v3.4.0, Copyright (c) Zend Technologies
    with Zend OPcache v7.4.33, Copyright (c), by Zend Technologies

    5. Restart PHP-FPM and your webserver

    sudo systemctl restart php7.4-fpm; sudo /opt/bitnami/ctlscript.sh restart apache


    Best Practices for cPanel Security in 2023: Protecting Your Website and Data

    As the world becomes increasingly digital, the need for strong security measures to protect websites and online data has never been more pressing. For websites hosted on cPanel servers, ensuring the security of the cPanel environment is crucial to protecting both the website and the data it hosts. In 2023, the threat of cyber attacks continues to grow, making it more important than ever for website owners and system administrators to implement best practices for cPanel security. In this blog post, we’ll explore the top best practices for cPanel security in 2023, including using strong passwords, enabling two-factor authentication, keeping cPanel up-to-date with the latest security patches, using SSL certificates, and more. By implementing these best practices, website owners and system administrators can help ensure the security and integrity of their cPanel environments, and protect their websites and data from cyber threats.

    1. Use Strong Passwords

    One of the simplest and most effective ways to improve cPanel security is by using strong passwords. Weak passwords can be easily cracked by hackers, giving them access to your cPanel environment and all the websites and data hosted on it. By using strong passwords, you can help ensure that only authorized users have access to your cPanel environment, and protect your website and data from cyber threats.

    To create strong passwords, it’s important to use a mix of uppercase and lowercase letters, numbers, and symbols. Avoid using dictionary words, common phrases, or personal information like your name or birthdate, as these can be easily guessed by hackers using brute-force attacks. Instead, use a combination of random characters that are difficult to guess.

    Additionally, it’s recommended that users use a unique password for each account they have, rather than reusing the same password across multiple accounts. This can help prevent a single compromised password from giving hackers access to multiple accounts.

    For users who find it difficult to remember multiple strong passwords, password managers can be a helpful tool. Password managers generate and store strong passwords for each account, so users don’t have to remember them all. Additionally, many password managers include features like two-factor authentication and password auditing, which can further improve cPanel security.

    2. Enable Two-Factor Authentication
    Two-factor authentication (2FA) is an extra layer of security that requires users to provide two forms of authentication in order to access an account. Typically, this involves entering a username and password (the first factor), and then providing a second form of authentication, such as a security code sent to a mobile device or email (the second factor).

    By enabling 2FA in cPanel, users can add an extra layer of security to their accounts, making it more difficult for hackers to gain access to their cPanel environment, even if they have obtained the user’s password through a data breach or other means.

    To enable 2FA in cPanel, users can follow these steps:

    1. Log in to WHM panel
    2. Click on the “Two-Factor Authentication” icon under the “Security Center” section
    3. Follow the prompts to set up 2FA using one of the available methods, such as Google Authenticator or Microsoft Authenticator.

    cPanel provides detailed documentation on how to enable 2FA for cPanel accounts, which can be found here: https://docs.cpanel.net/whm/security-center/two-factor-authentication-for-whm/

    By enabling 2FA, users can add an extra layer of security to their cPanel accounts, helping to protect their websites and data from unauthorized access.

    3. Keep cPanel Up-to-Date

    Keeping cPanel up-to-date with the latest security patches and fixes is essential for maintaining the security of your cPanel environment. As new vulnerabilities are discovered, cPanel releases updates that address these issues, making it more difficult for hackers to exploit these vulnerabilities to gain access to your cPanel account.

    To update cPanel, users can follow these steps:

    1. Log in to WHM (Web Host Manager)
    2. Navigate to Home » cPanel
    3. Click on the “Upgrade to Latest Version” button
    4. Follow the prompts to update cPanel to the latest version.

    It’s important to test updates before deploying them to production to ensure that they do not cause any compatibility issues or other problems that could negatively impact your website or data.
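
    Updates can also be triggered from the command line with cPanel’s upcp script, which is useful for scripted maintenance windows:

    sudo /scripts/upcp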

    4. Secure SSH
    SSH (Secure Shell) is a network protocol that allows users to securely connect to a remote server. In cPanel, SSH can be accessed through the Terminal feature. It’s important to secure SSH to prevent unauthorized access and protect your server from potential attacks.

    Here are some best practices for securing SSH in cPanel:

    Use strong SSH passwords: As with all passwords, it’s essential to use strong, complex passwords for SSH. Use a mix of uppercase and lowercase letters, numbers, and symbols. Avoid using easily guessable passwords such as “password” or “123456.”

    Use SSH keys: SSH keys are a more secure way to authenticate than passwords. They use public-key cryptography to authenticate users and are not vulnerable to brute-force attacks. Consider using SSH keys instead of passwords for SSH authentication.

    Change the default SSH port: By default, SSH uses port 22. Changing the default port to a non-standard port can make it harder for attackers to find your server and attempt to gain unauthorized access. Choose a high port number between 1024 and 65535.

    Disable root login: By default, the root user is allowed to log in via SSH. However, this can be a security risk as attackers often target the root user. Consider disabling root login and using a separate, non-root user for SSH access.
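
    As a minimal sketch, the recommendations above map to the following /etc/ssh/sshd_config directives (the port number is only an example; keep an existing session open while you test the new settings):

    # /etc/ssh/sshd_config
    Port 2222                    # example non-standard port
    PermitRootLogin no           # block direct root logins
    PasswordAuthentication no    # require SSH keys instead of passwords

    Then restart SSH to apply the changes:

    sudo systemctl restart sshd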

    5. Control access to services by IP Address

    One of the best ways to improve cPanel security is to limit access to it only to those who need it. Unauthorized access can compromise your website and put sensitive data at risk. One effective method to limit access is by using WHM’s Host Access Control interface.

    WHM’s Host Access Control interface is a front-end tool that allows you to configure the /etc/hosts.deny and /etc/hosts.allow files. These files are used by the TCP wrappers facility to restrict access to services such as cPanel, WHM, SSH, FTP, SMTP, and more.

    Using the Host Access Control interface, you can easily add or remove IP addresses or ranges that are allowed or denied access to cPanel and other services. This provides an additional layer of security for your server by preventing unauthorized access attempts from specific IP addresses.

    To access the Host Access Control interface, log in to WHM and navigate to the “Security Center” section. From there, click on “Host Access Control.” You can then configure the settings according to your needs.

    By taking advantage of WHM’s Host Access Control interface, you can ensure that only authorized users are allowed access to cPanel and other services on your server, significantly reducing the risk of unauthorized access and potential security breaches.

    You can find examples of how to configure Host Access Control in the document below:
    https://docs.cpanel.net/whm/security-center/host-access-control/
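
    For illustration only (203.0.113.0/24 is a documentation address range; substitute your own management network), a restrictive TCP wrappers setup might look like this:

    # /etc/hosts.allow -- allow SSH, WHM, and cPanel only from trusted addresses
    sshd: 203.0.113.0/24
    whostmgrd: 203.0.113.0/24
    cpaneld: 203.0.113.0/24

    # /etc/hosts.deny -- deny everyone else
    sshd: ALL
    whostmgrd: ALL
    cpaneld: ALL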

    6. Use strong Firewall
    A firewall is a network security tool that monitors and controls incoming and outgoing network traffic based on predetermined security rules. It acts as a barrier between your server and the outside world, preventing unauthorized access and blocking malicious traffic. A firewall can also help mitigate the impact of DDoS attacks by filtering out unwanted traffic before it reaches your server.

    To implement a firewall on a cPanel server, you can use third-party software such as ConfigServer Security & Firewall (CSF) or Advanced Policy Firewall (APF). These firewall solutions are designed specifically for cPanel and offer an easy-to-use interface for managing firewall rules. They support a variety of configuration options and can be customized to suit your specific needs.

    Neither CSF nor APF supports firewalld, so you may need to disable firewalld and install iptables before installing one of them. Once installed, you can configure firewall rules to limit access to specific ports and protocols, block known malicious IPs, and prevent unauthorized access to your server. You can also set up alerts to be notified when a security event occurs, such as when a blocked IP tries to access your server.

    While firewalld is a popular firewall solution for many Linux systems, csf and apf have some advantages that make them better suited for cPanel servers. Here are a few reasons why:

    Integration with cPanel: Both csf and apf are specifically designed to work with cPanel, meaning they integrate seamlessly with the control panel’s user interface and make it easier to manage firewall rules.

    User-friendly interface: Both csf and apf offer a simple, easy-to-use interface for managing firewall rules, making it easier for cPanel users with little or no experience in server administration to set up and manage their firewall.

    Advanced features: Both csf and apf offer advanced features such as connection rate limiting, port scanning detection, and real-time blocking, which can help to further improve server security.

    Community support: csf and apf have been around for many years and have active communities of users and developers, which means that they are well-supported and regularly updated with the latest security features and bug fixes.

    Overall, while firewalld is a good option for general Linux servers, csf and apf are more tailored to cPanel and offer advanced features and integration that make them better suited for cPanel servers. Note that you should install only one of them.
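
    For example, CSF can be installed with ConfigServer’s standard procedure (review the install script before running it):

    cd /usr/src
    wget https://download.configserver.com/csf.tgz
    tar -xzf csf.tgz
    cd csf
    sudo sh install.sh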

    7. Enable Brute Force Protection
    Brute force attacks are a type of cyber attack in which an attacker attempts to gain access to a system by repeatedly guessing usernames and passwords until the correct combination is found. These attacks can be particularly harmful for cPanel servers, as they can potentially give attackers access to sensitive data and website files.

    To protect against brute force attacks, cPanel offers built-in brute force protection tools that can be enabled by the server administrator. These tools work by blocking IP addresses that repeatedly fail login attempts within a certain timeframe.

    To enable brute force protection in cPanel, follow these steps:

    1. Log in to WHM as the root user.
    2. Navigate to Home > Security Center > cPHulk Brute Force Protection.
    3. Click the “Enable” button to enable brute force protection.
    4. Configure the settings to suit your needs, such as the number of login attempts allowed before blocking an IP address and the duration of the block.

    It’s important to note that enabling brute force protection can sometimes result in false positives, such as when legitimate users mistype their passwords. To avoid these situations, consider adding IP addresses to a whitelist of trusted users who should not be blocked by the brute force protection tool.
    For more detailed instructions on how to enable and configure cPanel’s brute force protection tool, refer to the cPanel documentation below:
    https://docs.cpanel.net/whm/security-center/cphulk-brute-force-protection/
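
    If you administer the server from a static IP address, you can whitelist it from the shell as well; the cphulkdwhitelist script ships with current cPanel versions (the IP below is a placeholder):

    sudo /scripts/cphulkdwhitelist 203.0.113.10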

    8. Regularly Back Up Website and cPanel Data
    Regularly backing up website and cPanel data is crucial to ensuring the availability and integrity of your data. A backup is essentially a copy of your data that you can restore in case of data loss, corruption, or other unexpected events. Without a backup, you risk losing your data permanently, which can have serious consequences for your business or personal website.

    Creating an effective backup strategy involves several key considerations. Here are some tips:

    1. Choose a backup solution: cPanel comes with its own built-in backup solution that allows you to create full or partial backups of your cPanel account, including your website files, databases, email accounts, and settings. It’s essential to use a reliable backup solution that can handle your data size and is compatible with your hosting environment.

    2. Determine backup frequency: The backup frequency depends on the frequency of changes to your website and data. For example, if you make frequent changes to your website or store sensitive data, you may need to back up your data daily or weekly. You may also consider backing up before making significant changes to your website or software.

    3. Store backups in multiple locations: Storing backups in multiple locations is essential to ensure that you can restore your data in case of a disaster or outage. You can store backups locally on your server, but it’s also recommended to store backups remotely, such as in cloud storage or an offsite location.

    4. Automate backups: Manually creating backups can be time-consuming and error-prone, which is why it’s recommended to automate backups. You can use cPanel’s built-in backup solution to schedule backups automatically or use third-party backup solutions like JetBackup to create automated backups.
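
    Once scheduling is in place, you can trigger a run manually to confirm the configuration works; the backup binary path below is standard on cPanel servers:

    sudo /usr/local/cpanel/bin/backup --force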

    For advanced backup options, you may consider using JetBackup, which offers features like incremental backups, remote backups, and backup retention policies. JetBackup is an excellent option for those who require more customization and configuration options than what is available with cPanel’s built-in backup system. Their FAQ is a useful resource for anyone looking to learn more about JetBackup’s features and capabilities.
    https://docs.jetbackup.com/manual/whm/FAQ/FAQ.html

    By implementing an effective backup strategy, you can ensure the availability and integrity of your data, and quickly restore your website and cPanel account in case of a disaster or data loss event.

    9. Secure Apache
    Securing Apache on cPanel is an essential step in protecting your website and data. Here are some ways to do it:

    Use ModSecurity: ModSecurity is an open-source web application firewall that can help protect your website from a wide range of attacks. It can also help block malicious traffic before it reaches your server. WHM’s ModSecurity® Vendors interface allows you to install the (OWASP) Core Rule Set (CRS), which is a set of rules designed to protect against common web application attacks.

    Use suEXEC module: suEXEC is a module that allows scripts to be executed under their own user ID instead of the default Apache user. This provides an additional layer of security by limiting the impact of a compromised script to the user’s home directory instead of the entire server.

    Implement symlink race condition protection: Symlink race condition vulnerabilities can allow attackers to gain access to files that they should not have access to. Implementing symlink race condition protection helps prevent these vulnerabilities by denying access to files and directories that have weak permissions.

    Implementing these measures can help secure Apache on cPanel and protect your website and data from potential security breaches.

    10. Disable unused services and daemons
    Disabling unused services and daemons is an important step in ensuring the security of your cPanel server. Any service or daemon that allows connections to your server may also allow hackers to gain access, so disabling them can greatly reduce the risk of a security breach.
    To disable unused services and daemons in cPanel, you can use the Service Manager interface in WHM. This interface allows you to view a list of all the services and daemons running on your server and disable the ones that you do not need.

    To access the Service Manager interface, log in to WHM and navigate to Home » Service Configuration » Service Manager. Here, you will see a list of all the services and daemons running on your server, along with their status (either Enabled or Disabled).

    To disable a service or daemon, simply click the Disable button next to its name. You can also use the checkboxes at the top of the page to select multiple services or daemons and disable them all at once.

    11. Monitor your system
    It is important to regularly monitor your server and review logs to ensure that everything is functioning as expected and to quickly identify any potential security threats. You can set up alerts and notifications to stay informed about any issues that arise.

    To effectively monitor your system, you can use various tools and software solutions. Some popular ones include:

    Tripwire: This tool monitors checksums of files and reports changes. It can be used to detect unauthorized changes to critical system files.
    Chkrootkit: This tool scans for common vulnerabilities and rootkits that can be used to gain unauthorized access to your system.
    Rkhunter: Similar to Chkrootkit, this tool scans for common vulnerabilities and rootkits, and can help detect potential security threats.
    Logwatch: This tool monitors and reports on daily system activity, including any unusual or suspicious events that may require further investigation.
    ConfigServer eXploit Scanner: This tool scans your system for potential vulnerabilities and provides detailed reports on any security issues found.
    ImunifyAV: This is a popular antivirus solution for cPanel servers, which can scan your system for malware and other security threats.
    Linux Malware Detect: This is another popular malware scanner for Linux servers, which can detect and remove malicious files.
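
    For instance, Rkhunter can be installed from your distribution’s repositories (on RHEL-based systems it may require EPEL) and run like this:

    sudo apt install rkhunter    # Debian/Ubuntu
    sudo yum install rkhunter    # RHEL/CentOS, with EPEL enabled
    sudo rkhunter --update
    sudo rkhunter --check --sk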

    12. Use SSL Certificates whenever possible
    SSL certificates are digital certificates that provide secure communication between a website and its visitors by encrypting the data transmitted between them. They help protect against eavesdropping and data theft by making sure that the data being exchanged is not intercepted and read by any third party.

    To obtain and install an SSL certificate in cPanel, you can either purchase one from a trusted certificate authority or use a free SSL provider. To install a certificate, you’ll need to generate a certificate signing request (CSR) and then use it to obtain the SSL certificate. Once you have the certificate, you can install it through cPanel’s SSL/TLS Manager interface.

    One way to obtain a free SSL certificate is through cPanel’s AutoSSL feature, which can automatically provision and renew SSL certificates for domains hosted on the server. Let’s Encrypt and Sectigo are two SSL providers that are supported by AutoSSL.
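
    AutoSSL normally runs on a schedule, but you can also start a check manually; recent cPanel versions ship the script at the path below:

    sudo /usr/local/cpanel/bin/autossl_check --all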

    Enforcing and using SSL for cPanel services, like webmail and cPanel itself, is also important for security. You can require SSL for cPanel services by enabling the “Force HTTPS Redirect” option in cPanel’s “SSL/TLS” interface. Additionally, you can use the “Require SSL” option to require SSL connections for specific cPanel services, like webmail or FTP.

    Summary
    Securing your cPanel server is crucial to protect your website and data from cyber attacks. In this blog post, we discussed some best practices for cPanel security in 2023, including:

    1. Updating cPanel and its components regularly to ensure the latest security patches.
    2. Creating strong passwords and enabling two-factor authentication.
    3. Limiting access to cPanel to only those who need it and using WHM’s Host Access Control interface to restrict access.
    4. Implementing a firewall like CSF or APF to protect against cyber attacks.
    5. Enabling brute force protection and regularly backing up website and cPanel data.
    6. Securing Apache with ModSecurity and the suEXEC module, and disabling unused services and daemons.
    7. Monitoring your system with tools like Tripwire, Chkrootkit, Rkhunter, Logwatch, ConfigServer eXploit Scanner, ImunifyAV, and Linux Malware Detect.
    8. Using SSL certificates to encrypt data in transit, and enforcing SSL for cPanel services using the “Require SSL” feature.

    By following these best practices, you can significantly improve the security of your cPanel server and protect your website and data from cyber threats. Remember, security is an ongoing process, so it’s essential to stay vigilant and regularly monitor your system for any vulnerabilities or suspicious activity.
