Hi @etianen,
A pleasure to talk to you. I used django-reversion in the past, and now I'm finding django-s3-storage very useful for my new project. I think you are a great developer and I am very happy to submit an issue here.
I usually set up my projects for local development using docker-compose, keeping them as close to production as possible. I will be using MinIO (S3-compatible), so the setup basically looks like this (simplified):
```yaml
minio:
  image: minio/minio
  volumes:
    - minio_data:/data
  environment:
    - MINIO_ROOT_USER=minio
    - MINIO_ROOT_PASSWORD=minio123
  command: server /data
  ports:
    - "9000:9000"

django:
  build: .
  environment:
    - STORAGE_USER=minio
    - STORAGE_PASSWORD=minio123
    - STORAGE_INTERNAL_URL=http://minio:9000
    - STORAGE_EXTERNAL_URL=http://localhost:9000
  depends_on:
    - minio
```
Note that in a setup like this, the Docker services communicate over their own private network with its own DNS resolution, which is not accessible from the host. So `django` can access `minio` at `http://minio:9000`, but from the host it has to be reached through `http://localhost:9000`.
I thought, well, this is fine: I can just set `AWS_S3_ENDPOINT_URL` to `http://minio:9000` and `AWS_S3_PUBLIC_URL` to `http://localhost:9000`. The problem is that setting `AWS_S3_PUBLIC_URL` completely skips the generation of pre-signed URLs, so right now it is not possible to use private buckets in docker-compose setups.
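For context, my Django settings for this attempt look roughly like this (a sketch; the bucket name is made up, and the environment variables come from the compose file above):

```python
import os

# django-s3-storage settings (sketch)
AWS_ACCESS_KEY_ID = os.environ["STORAGE_USER"]
AWS_SECRET_ACCESS_KEY = os.environ["STORAGE_PASSWORD"]
AWS_S3_BUCKET_NAME = "my-bucket"  # hypothetical bucket name
AWS_S3_BUCKET_AUTH = True  # private bucket, so pre-signed URLs are required

# Internal endpoint, reachable only from inside the compose network:
AWS_S3_ENDPOINT_URL = os.environ["STORAGE_INTERNAL_URL"]  # http://minio:9000
# Public base URL, reachable from the host's browser -- but setting
# this disables pre-signed URL generation entirely:
AWS_S3_PUBLIC_URL = os.environ["STORAGE_EXTERNAL_URL"]  # http://localhost:9000
```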
I wonder if this should be expected behavior. If not, I can submit a PR to fix it. Your call!
For now, I am working around this limitation by creating a custom storage class that injects this mixin into any of the storage backends provided by this project. It is ugly, but it does the job. Note that I use my own `AWS_S3_PUBLIC_BASE_URL` setting instead of `AWS_S3_PUBLIC_URL`:
```python
from urllib.parse import urlparse

from django.conf import settings


class PublicUrlMixin:
    base_url = urlparse(settings.AWS_S3_PUBLIC_BASE_URL)

    def url(self, *args, **kwargs):
        # Let the backend generate the (pre-signed) URL against the internal
        # endpoint, then swap in the externally reachable scheme and host.
        return (
            urlparse(super().url(*args, **kwargs))
            ._replace(scheme=self.base_url.scheme, netloc=self.base_url.netloc)
            .geturl()
        )
```
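To illustrate what the mixin does, here is a self-contained sketch of the same rewrite using only the standard library: it swaps the scheme and netloc of a pre-signed URL while leaving the path and signed query string untouched (the example URL and signature value are made up):

```python
from urllib.parse import urlparse


def rewrite_base(url: str, public_base: str) -> str:
    """Replace the scheme/netloc of `url` with those of `public_base`,
    keeping the path and (signed) query string intact."""
    base = urlparse(public_base)
    return urlparse(url)._replace(scheme=base.scheme, netloc=base.netloc).geturl()


# A pre-signed URL generated against the internal endpoint...
signed = "http://minio:9000/bucket/file.txt?X-Amz-Signature=abc123"
# ...rewritten so the host's browser can reach it:
print(rewrite_base(signed, "http://localhost:9000"))
# http://localhost:9000/bucket/file.txt?X-Amz-Signature=abc123
```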