EXTREMELY IMPORTANT: Please attach a ZIP file containing your Akeeba Backup log file in order for us to help you with any backup or restoration issue. If the file is over 10MiB, please upload it on your server and post a link to it.
Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.
Latest post by nicholas on Thursday, 03 October 2024 01:48 CDT
I'm having a problem: when I try to run a backup via ghcr.io/akeeba/remotecli, I get the error Error #0 - SSL certificate problem: self-signed certificate. I have already tried both the http and curl modes. Is there an exception like wget --no-check-certificate when using the ghcr.io/akeeba/remotecli image?
Thank you
Do remember that ghcr.io is operated by Microsoft; it's part of GitHub itself. We have no control over it, let alone its TLS certificates.
That said, something is definitely wrong with your machine. Please see https://www.ssllabs.com/ssltest/analyze.html?d=ghcr.io . As you see, the TLS certificate of ghcr.io is valid, signed by Sectigo, and expires in June 2025. Are you sure your machine is not simply missing the CA root cache?
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!
Thanks for the response. The issue I'm having is trying to make a remote backup of a site that's behind Traefik. The problem is that remotecli performs certificate validation, and Traefik serves a default certificate. I copied the certificate, but haven't been able to get positive results. Here's the command I used...
docker run --rm \
-v /home/xxxxxxx/pruebaAkeeba:/Downloads \
-e http_proxy='http://xx.xx.xx.xx:8080' \
-e https_proxy='http://xx.xx.xx.xx:8080' \
-e no_proxy='localhost,127.0.0.1,10.*' \
--network host \
ghcr.io/akeeba/remotecli backup --profile=1 \
--host="https://serviciosti1.xxx.xxx.ar/index.php?option=com_akeebabackup&view=Api&format=raw" \
--certificate=/Downloads/cert.pem \
--secret="secretpassword" --dlmode=http --dlpath="./Downloads" --delete --debug
and the debug output:
Error #0 - SSL: no alternative certificate subject name matches target host name 'serviciosti1.xxx.xxx.ar'
Stack Trace (for debugging):
#0 /opt/remotecli/vendor/joomla/http/src/Http.php(314): Joomla\Http\Transport\Curl->request('GET', Object(Joomla\Uri\Uri), NULL, Array, NULL, 'AkeebaRemoteCLI...')
#1 /opt/remotecli/vendor/joomla/http/src/Http.php(152): Joomla\Http\Http->makeTransportRequest('GET', Object(Joomla\Uri\Uri), NULL, Array, NULL)
#2 /opt/remotecli/vendor/akeeba/json-backup-api/src/HttpAbstraction/HttpClientJoomla.php(97): Joomla\Http\Http->get('https://servici...')
#3 /opt/remotecli/vendor/akeeba/json-backup-api/src/HttpAbstraction/AbstractHttpClient.php(41): Akeeba\BackupJsonApi\HttpAbstraction\HttpClientJoomla->getRawResponse('GET', 'getVersion', Array)
#4 /opt/remotecli/vendor/akeeba/json-backup-api/src/HighLevel/Autodetect.php(65): Akeeba\BackupJsonApi\HttpAbstraction\AbstractHttpClient->doQuery('getVersion')
#5 [internal function]: Akeeba\BackupJsonApi\HighLevel\Autodetect->__invoke()
#6 /opt/remotecli/vendor/akeeba/json-backup-api/src/Connector.php(65): call_user_func(Object(Akeeba\BackupJsonApi\HighLevel\Autodetect))
#7 /opt/remotecli/Application/Command/AbstractCommand.php(92): Akeeba\BackupJsonApi\Connector->__call('autodetect', Array)
#8 /opt/remotecli/Application/Command/Backup.php(33): Akeeba\RemoteCLI\Application\Command\AbstractCommand->getApiObject()
#9 /opt/remotecli/Application/Kernel/Dispatcher.php(102): Akeeba\RemoteCLI\Application\Command\Backup->execute()
#10 /opt/remotecli/remote.php(125): Akeeba\RemoteCLI\Application\Kernel\Dispatcher->dispatch()
#11 {main}
That is why I asked whether there is an option not to validate certificates.
Thanks
SSL: no alternative certificate subject name matches target host name 'serviciosti1.xxx.xxx.ar'
Your problem is that the certificate does NOT match the hostname serviciosti1.xxx.xxx.ar. You cannot "accept" an invalid certificate.
Fix your site. Let's Encrypt certificates are FREE and have been free since 2015. There's no reason to have a live host with an invalid, self-signed, default, not-meant-for-production certificate anymore.
Thank you for your help, Nicholas, but let me give you some context.
We belong to an organization whose cybersecurity policy prohibits sharing the organization's valid certificates and requires that all internal communication take place over HTTPS with self-signed certificates. We do not control DNS or the challenge mechanism needed to use Let's Encrypt, and, as I mentioned before, we cannot access the organization's certificates. Backups must be generated internally, within the restrictions imposed by cybersecurity.
An example of what I'm trying to say would be something like: wget --no-check-certificate https://xxx.xxx
That is why my question was related to whether there is any way to enable HTTPS communication while bypassing certificate validation.
Many thanks.
Cybersecurity enforcing bad TLS certificates which can only be used... by ignoring them? What else do they do at this company? Security doors with no locks? Fire extinguishers filled with gasoline?
Your problem is NOT that your certificate is self-signed. You can pass the expected certificate, or its signing authority's public certificate, with the --certificate parameter. There is a valid use case for this, and it's fully supported.
Let me explain this by first telling you how an internal, air-gapped setup we actually use for testing works. We use the hostname test.local.web, which is mapped to 127.0.0.1 and ::1 via /etc/hosts on our Linux server. I have created our own root CA, which signs an intermediate CA, which in turn signs the certificate for test.local.web. I pass the public certificate of the root CA in the --certificate parameter. The cURL library (what PHP actually uses to communicate with the web server) sees the certificate for test.local.web and confirms that a. its hostname matches the hostname cURL was asked to connect to, and b. the certificate is validly signed by the intermediate CA, which is validly signed by the root CA, which is part of the certificate authority cache we told cURL to use (and which includes our custom certificate). There are many more checks performed (e.g. date checks, integrity checks, etc.), but these two are the ones pertinent to our discussion.
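For reference, a chain like the one I just described can be reproduced with openssl. This is only a sketch: all file names and subject names below are illustrative, and only the hostname test.local.web comes from our setup.

```shell
# 1. Self-signed root CA (illustrative names throughout)
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootCA.key -out rootCA.pem \
  -days 365 -subj "/CN=Example Root CA"

# 2. Intermediate CA, signed by the root (must carry basicConstraints=CA:TRUE)
openssl req -newkey rsa:2048 -nodes -keyout intCA.key -out intCA.csr \
  -subj "/CN=Example Intermediate CA"
printf "basicConstraints=CA:TRUE\n" > intCA.ext
openssl x509 -req -in intCA.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -days 365 -extfile intCA.ext -out intCA.pem

# 3. Leaf certificate for test.local.web, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
  -subj "/CN=test.local.web"
printf "subjectAltName=DNS:test.local.web\n" > site.ext
openssl x509 -req -in site.csr -CA intCA.pem -CAkey intCA.key \
  -CAcreateserial -days 365 -extfile site.ext -out site.pem

# 4. Verify the chain the same way cURL does: trust the root, present the
#    intermediate, check the leaf. Prints "site.pem: OK" on success.
openssl verify -CAfile rootCA.pem -untrusted intCA.pem site.pem
```

With this kind of chain, remotecli is given the root CA's public certificate via the --certificate parameter (e.g. --certificate=/path/to/rootCA.pem), and validation succeeds without disabling anything.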
Your problem lies with the first of those two checks: your TLS certificate's Common Name does not correspond to the hostname you are asking cURL to connect to. In simple terms, your TLS certificate claims to be for a different domain name.
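You can see which hostname a certificate actually claims with openssl. The certificate generated below is a throwaway stand-in for Traefik's default certificate (the "TRAEFIK DEFAULT CERT" subject is illustrative); against your own cert.pem you would only run the inspection commands.

```shell
# Generate a throwaway self-signed certificate with a Common Name that does
# not match the site -- a stand-in for Traefik's default certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -days 1 -subj "/CN=TRAEFIK DEFAULT CERT"

# Show the Common Name the certificate claims; cURL compares this (and any
# Subject Alternative Names, visible via -text) against the hostname it was
# asked to connect to.
openssl x509 -in demo.pem -noout -subject
openssl x509 -in demo.pem -noout -text | grep -A1 "Subject Alternative Name" || true
```

If the subject printed here doesn't mention your site's hostname, no amount of CA configuration will make validation pass.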
But, why does cURL (and wget, and every other HTTPS library) do both of these checks? Why not just check if the TLS certificate is signed? Why check the Common Name at all? No, it's not a conspiracy to sell more certificates. To understand the reasoning we have to step back and talk about HTTPS.
The whole point of HTTP over TLS (HTTPS) is to prevent Man-In-The-Middle (MITM) attacks. HTTPS can help with that because the two ends – the client (you) and server (site) – can mutually authenticate each other over a tamper-evident protocol. The tamper-evident property is crucial. The lynchpin of the whole process is the server's TLS certificate. This lets the client verify that it's talking to the intended server and, as a result, trust the rest of the information exchange between them.
An evil attacker sitting between the client (you) and the server cannot break HTTPS without making it obvious; that's the tamper-evident property of the protocol. At best, the attacker in the middle would be establishing an HTTPS connection to the client, and a separate one to the server, relaying data between these two connections. But since the attacker is sitting at the end of each encrypted connection, they can record or manipulate this data. This would be obvious to the client because the TLS certificate used in the HTTPS connection to the man-in-the-middle would a. have a Common Name not matching the actual site we're connecting to and/or b. not have a valid signature from a Certification Authority the client trusts. As you may have noted, these are the two things checked by cURL and every other HTTPS library.
So, what does --no-check-certificate do?
In short, it removes both of these checks (plus the other checks I didn't include, since they are irrelevant to our discussion). Since the library checks neither the Common Name nor the signature of the certificate, a MITM attacker would be able to operate undiscovered. In simple terms, this option breaks HTTPS, offering exactly ZERO security whatsoever, making the connection equivalent to plain old HTTP as far as MITM attacks go. The only "protection" it offers is against naive network packet scanning.
Why does wget offer this option?
wget is a generic tool. It has to support a lot more use cases than our software. There are many use cases where you want to establish whether the problem is the certificate or the server. Using this option you will know. If the connection is made with this option, the problem is the certificate. This is a great troubleshooting first step. It's certainly easier to use than openssl, which is the second step if the problem was found to indeed be the certificate.
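The two troubleshooting steps above can be sketched like this. Everything here is illustrative: the wget URL is a placeholder, and the openssl s_server instance is a throwaway local stand-in (with a deliberately wrong Common Name) so the commands are self-contained; against a real site you would run s_client against host:443 instead.

```shell
# Step 1 (wget): if this succeeds where plain wget fails, the certificate is
# the problem, not the server:
#   wget --no-check-certificate https://example.internal/

# Step 2 (openssl): inspect the certificate the server actually presents.
# Start a throwaway local TLS server with a wrong-CN certificate...
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.pem \
  -days 1 -subj "/CN=wrong.example"
openssl s_server -key t.key -cert t.pem -accept 18443 -quiet &
SERVER_PID=$!
sleep 1

# ...then extract the subject of the presented certificate. This is what you
# compare against the hostname you meant to connect to. Against a real site:
#   openssl s_client -connect host:443 -servername host
echo | openssl s_client -connect 127.0.0.1:18443 2>/dev/null \
  | openssl x509 -noout -subject

kill $SERVER_PID
```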
Why are we not offering this option in our software?
Because there is no practical use case where it makes sense to do that instead of using plain old HTTP. If you have an issue with your certificate's signature (but the Common Name is correct) we do give you the --certificate option to work around it; that's a valid use case, as illustrated by our air-gapped test setup described above.
If, however, your certificate's Common Name does not even match your site's domain name, you are in a situation with zero security, so you might just as well use plain old HTTP instead of HTTPS with certificate validation disabled! In both cases the connection is equally insecure. The only difference is that HTTPS with --no-check-certificate gives you an illusion of security (even though there is none to be found), whereas using HTTP raises your hackles because you instinctively know it's insecure.
Now, about the organisation.
I find it hard to believe that the cybersecurity department would enforce using TLS certificates with the wrong Common Name. If they had even a junior sysadmin with a passing interest in security amongst them they'd immediately know that the only way to use this kind of TLS certificate is by ignoring it, which is worse than using plain old HTTP. If they are unaware of this fact, they need to be fired immediately.
Worse than providing no security, worse than providing a false sense of security, their enforced rule is a huge liability for the company they work for! There is no insurer in the world who will take a quick glance at this and pay out a cybersecurity insurance claim when (not if) they get hacked or ransomwared, as this policy runs afoul of the contractual requirement to take preventative security measures. The same goes for any cybersecurity assessment. If they implemented this policy to tick the "Use HTTPS everywhere" box in a self-assessment cybersecurity form, then the company has misrepresented its security posture and the self-assessment is invalid! They are putting their company at huge financial and legal risk by being, not to put too fine a point on it, utterly incompetent.
As I said in the beginning of this reply, what they do is like having security doors without locks, or fire extinguishers filled with gasoline. Sure, the security theater is there, but there's no real security. It's more than certain that in case of an actual emergency the bad security practices would cause more harm than good.
Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of which live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.
Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!