The example I followed is: https://clubhouse.io/developer-how-to/how-to-set-up-a-hadoop-cluster-in-docker/
I first started HDFS with: docker-compose up -d
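For context, this is roughly how the cluster is brought up (a minimal sketch; docker-hadoop-master is just the directory the compose file from that article lives in, and Compose derives the network name from it):
cd docker-hadoop-master
docker-compose up -d
# sanity check: the Hadoop containers and the compose default network should exist
docker ps --format '{{.Names}}\t{{.Status}}'
docker network ls | grep docker-hadoop-master_default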
Then I started Zookeeper, Kafka and MySQL using the images from the Debezium website: https://debezium.io/documentation/reference/1.0/tutorial.html
docker run -it --rm --name zookeeper --network docker-hadoop-master_default -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.0
docker run -it --rm --name kafka --network docker-hadoop-master_default -e ZOOKEEPER_CONNECT=zookeeper -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:1.0
docker run -it --rm --name mysql --network docker-hadoop-master_default -p 3306:3306 -e MYSQL_ROOT_PASSWORD=debezium -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:1.0
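As a quick sanity check (a sketch, using the credentials from the example image above), the inventory schema should already contain the sample tables:
docker exec -it mysql mysql -umysqluser -pmysqlpw inventory -e 'SHOW TABLES;'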
I used --network on these runs because when I tried to change the network in the HDFS docker-compose.yml, the resourcemanager went down and I could not find a way to restart it and keep it stable. So I attached Zookeeper, Kafka and MySQL directly to that network instead.
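To see which containers ended up attached to that network, and which IPs they were given (this is where the 172.18.0.x addresses used below come from), the network can be inspected:
docker network inspect docker-hadoop-master_default
# the "Containers" section lists every attached container together with its IPv4 address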
Then comes the trickiest part, Kafka Connect. I used the same network here as well, which makes sense in this case.
docker run -it --rm --name connect --network docker-hadoop-master_default -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses -e BOOTSTRAP_SERVERS="172.18.0.10:9092" -e CORE_CONF_fs_defaultFS=hdfs://172.18.0.2:9000 --link namenode:namenode --link zookeeper:zookeeper --link mysql:mysql debezium/connect:1.0
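Before registering any connectors it is worth checking that the Connect REST API answers on the published port:
curl -s localhost:8083/
curl -s localhost:8083/connectors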
To hook the source (MySQL) up to Kafka, I used the connector from the Debezium tutorial, as shown below.
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{"name":"inventory-connector","config":{"connector.class":"io.debezium.connector.mysql.MySqlConnector","tasks.max":"1","database.hostname":"mysql","database.port":"3306","database.user":"debezium","database.password":"dbz","database.server.id":"184054","database.server.name":"dbserver1","database.whitelist":"inventory","database.history.kafka.bootstrap.servers":"kafka:9092","database.history.kafka.topic":"dbhistory.inventory"}}'
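To confirm the source connector was registered and its task is RUNNING, the status endpoint can be queried:
curl -s localhost:8083/connectors/inventory-connector/status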
I tested that Kafka was actually receiving events from the source, and that worked fine.
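One way to watch those events is the watcher container from the same Debezium tutorial (a sketch; the topic name assumes the inventory connector registered above):
docker run -it --rm --name watcher --network docker-hadoop-master_default --link zookeeper:zookeeper --link kafka:kafka debezium/kafka:1.0 watch-topic -a -k dbserver1.inventory.customers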
Once that was set up, I started installing the plugin: I downloaded it from the Confluent website onto my local Linux machine, installed confluent-hub, and then installed the plugin locally. After that I created a kafka user and chowned everything in the plugin directory to kafka:kafka.
With all of that done, I used docker cp to copy it into the Kafka Connect container under /kafka/connect.
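Roughly, the local install and copy step looked like the sketch below; the component directory ./plugins and the confluentinc-kafka-connect-hdfs folder name are assumptions, so adjust them to wherever confluent-hub actually puts the plugin:
# install the HDFS sink plugin locally with confluent-hub (it may still prompt for worker configs)
confluent-hub install confluentinc/kafka-connect-hdfs:5.4.0 --component-dir ./plugins
# give ownership to the kafka user, as described above
sudo chown -R kafka:kafka ./plugins
# copy the plugin into the Connect container's plugin path and restart so it is picked up
docker cp ./plugins/confluentinc-kafka-connect-hdfs connect:/kafka/connect/
docker restart connect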
To verify the plugin was picked up, you should see something like the following: [{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"5.4.0"}, …
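That listing comes from Kafka Connect's connector-plugins REST endpoint, so the check is simply:
curl -s localhost:8083/connector-plugins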
After this step is where I think my problem lies:
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{"name":"hdfs-sink","config":{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":1,"topics":"dbserver1,dbserver1.inventory.products,dbserver1.inventory.products_on_hand,dbserver1.inventory.customers,dbserver1.inventory.orders,dbserver1.inventory.geom,dbserver1.inventory.addresses","hdfs.url":"hdfs://172.18.0.2:9000","flush.size":3,"logs.dir":"logs","topics.dir":"kafka","format.class":"io.confluent.connect.hdfs.parquet.ParquetFormat","partitioner.class":"io.confluent.connect.hdfs.partitioner.DefaultPartitioner","partition.field.name":"day"}}'
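After the POST, the same failure shown in the log below is also visible in the task status (sketch):
curl -s localhost:8083/connectors/hdfs-sink/status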
I don't know how to make Kafka Connect understand that I need the namenode's specific IP address; it just keeps throwing messages in which I see a different address than the expected hdfs://namenode:9000.
I also added -e CORE_CONF_fs_defaultFS=hdfs://172.18.0.2:9000 to the docker run that starts Kafka Connect in this setup, and when I POST the curl for hdfs-sink it gives me the message below.
Log from Kafka Connect:
2020-01-21 15:22:09,597 INFO || Creating connector hdfs-sink of type io.confluent.connect.hdfs.HdfsSinkConnector [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,597 INFO || Instantiated connector hdfs-sink with version 5.4.0 of type class io.confluent.connect.hdfs.HdfsSinkConnector [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,598 INFO || HdfsSinkConnectorConfig values:
avro.codec = null
connect.hdfs.keytab =
connect.hdfs.principal =
connect.meta.data = true
enhanced.avro.schema.support = false
filename.offset.zero.pad.width = 10
flush.size = 3
format.class = class io.confluent.connect.hdfs.parquet.ParquetFormat
hadoop.conf.dir =
hadoop.home =
hdfs.authentication.kerberos = false
hdfs.namenode.principal =
hdfs.url = hdfs://172.18.0.2:9000
kerberos.ticket.renew.period.ms = 3600000
logs.dir = logs
retry.backoff.ms = 5000
rotate.interval.ms = -1
rotate.schedule.interval.ms = -1
schema.cache.size = 1000
schema.compatibility = NONE
shutdown.timeout.ms = 3000
[io.confluent.connect.hdfs.HdfsSinkConnectorConfig]
2020-01-21 15:22:09,599 INFO || StorageCommonConfig values:
directory.delim = /
file.delim = +
storage.class = class io.confluent.connect.hdfs.storage.HdfsStorage
store.url = null
topics.dir = kafka
[io.confluent.connect.storage.common.StorageCommonConfig]
2020-01-21 15:22:09,599 INFO || HiveConfig values:
hive.conf.dir =
hive.database = default
hive.home =
hive.integration = false
hive.metastore.uris =
[io.confluent.connect.storage.hive.HiveConfig]
2020-01-21 15:22:09,600 INFO || PartitionerConfig values:
locale =
partition.duration.ms = -1
partition.field.name = [day]
partitioner.class = class io.confluent.connect.hdfs.partitioner.DefaultPartitioner
path.format =
timestamp.extractor = Wallclock
timestamp.field = timestamp
timezone =
[io.confluent.connect.storage.partitioner.PartitionerConfig]
2020-01-21 15:22:09,601 INFO || Finished creating connector hdfs-sink [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,601 INFO || SinkConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
topics = [dbserver1, dbserver1.inventory.products, dbserver1.inventory.products_on_hand, dbserver1.inventory.customers, dbserver1.inventory.orders, dbserver1.inventory.geom, dbserver1.inventory.addresses]
topics.regex =
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.SinkConnectorConfig]
2020-01-21 15:22:09,602 INFO || EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
topics = [dbserver1, dbserver1.inventory.products, dbserver1.inventory.products_on_hand, dbserver1.inventory.customers, dbserver1.inventory.orders, dbserver1.inventory.geom, dbserver1.inventory.addresses]
topics.regex =
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2020-01-21 15:22:09,604 INFO || [Worker clientId=connect-1, groupId=1] Starting task hdfs-sink-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2020-01-21 15:22:09,605 INFO || Creating task hdfs-sink-0 [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,606 INFO || ConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.ConnectorConfig]
2020-01-21 15:22:09,607 INFO || EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2020-01-21 15:22:09,608 INFO || TaskConfig values:
task.class = class io.confluent.connect.hdfs.HdfsSinkTask
[org.apache.kafka.connect.runtime.TaskConfig]
2020-01-21 15:22:09,608 INFO || Instantiated task hdfs-sink-0 with version 5.4.0 of type io.confluent.connect.hdfs.HdfsSinkTask [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,609 INFO || JsonConverterConfig values:
converter.type = key
decimal.format = BASE64
schemas.cache.size = 1000
schemas.enable = true
[org.apache.kafka.connect.json.JsonConverterConfig]
2020-01-21 15:22:09,610 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task hdfs-sink-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,610 INFO || JsonConverterConfig values:
converter.type = value
decimal.format = BASE64
schemas.cache.size = 1000
schemas.enable = true
[org.apache.kafka.connect.json.JsonConverterConfig]
2020-01-21 15:22:09,611 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task hdfs-sink-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,611 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task hdfs-sink-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,613 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2020-01-21 15:22:09,614 INFO || SinkConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
topics = [dbserver1, dbserver1.inventory.products, dbserver1.inventory.products_on_hand, dbserver1.inventory.customers, dbserver1.inventory.orders, dbserver1.inventory.geom, dbserver1.inventory.addresses]
topics.regex =
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.SinkConnectorConfig]
2020-01-21 15:22:09,618 INFO || EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.connect.hdfs.HdfsSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = hdfs-sink
tasks.max = 1
topics = [dbserver1, dbserver1.inventory.products, dbserver1.inventory.products_on_hand, dbserver1.inventory.customers, dbserver1.inventory.orders, dbserver1.inventory.geom, dbserver1.inventory.addresses]
topics.regex =
transforms = []
value.converter = null
[org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2020-01-21 15:22:09,622 INFO || ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [172.18.0.10:9092]
check.crcs = true
client.dns.lookup = default
client.id = connector-consumer-hdfs-sink-0
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = connect-hdfs-sink
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
[org.apache.kafka.clients.consumer.ConsumerConfig]
2020-01-21 15:22:09,653 INFO || Kafka version: 2.4.0 [org.apache.kafka.common.utils.AppInfoParser]
2020-01-21 15:22:09,653 INFO || Kafka commitId: 77a89fcf8d7fa018 [org.apache.kafka.common.utils.AppInfoParser]
2020-01-21 15:22:09,654 INFO || Kafka startTimeMs: 1579620129652 [org.apache.kafka.common.utils.AppInfoParser]
2020-01-21 15:22:09,659 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2020-01-21 15:22:09,677 INFO || [Consumer clientId=connector-consumer-hdfs-sink-0, groupId=connect-hdfs-sink] Subscribed to topic(s): dbserver1, dbserver1.inventory.products, dbserver1.inventory.products_on_hand, dbserver1.inventory.customers, dbserver1.inventory.orders, dbserver1.inventory.geom, dbserver1.inventory.addresses [org.apache.kafka.clients.consumer.KafkaConsumer]
2020-01-21 15:22:09,678 INFO || HdfsSinkConnectorConfig values:
avro.codec = null
connect.hdfs.keytab =
connect.hdfs.principal =
connect.meta.data = true
enhanced.avro.schema.support = false
filename.offset.zero.pad.width = 10
flush.size = 3
format.class = class io.confluent.connect.hdfs.parquet.ParquetFormat
hadoop.conf.dir =
hadoop.home =
hdfs.authentication.kerberos = false
hdfs.namenode.principal =
hdfs.url = hdfs://172.18.0.2:9000
kerberos.ticket.renew.period.ms = 3600000
logs.dir = logs
retry.backoff.ms = 5000
rotate.interval.ms = -1
rotate.schedule.interval.ms = -1
schema.cache.size = 1000
schema.compatibility = NONE
shutdown.timeout.ms = 3000
[io.confluent.connect.hdfs.HdfsSinkConnectorConfig]
2020-01-21 15:22:09,679 INFO || StorageCommonConfig values:
directory.delim = /
file.delim = +
storage.class = class io.confluent.connect.hdfs.storage.HdfsStorage
store.url = null
topics.dir = kafka
[io.confluent.connect.storage.common.StorageCommonConfig]
2020-01-21 15:22:09,679 INFO || HiveConfig values:
hive.conf.dir =
hive.database = default
hive.home =
hive.integration = false
hive.metastore.uris =
[io.confluent.connect.storage.hive.HiveConfig]
2020-01-21 15:22:09,680 INFO || PartitionerConfig values:
locale =
partition.duration.ms = -1
partition.field.name = [day]
partitioner.class = class io.confluent.connect.hdfs.partitioner.DefaultPartitioner
path.format =
timestamp.extractor = Wallclock
timestamp.field = timestamp
timezone =
[io.confluent.connect.storage.partitioner.PartitionerConfig]
2020-01-21 15:22:09,681 INFO || AvroDataConfig values:
connect.meta.data = true
enhanced.avro.schema.support = false
schemas.cache.config = 1000
[io.confluent.connect.avro.AvroDataConfig]
2020-01-21 15:22:09,681 INFO || Hadoop configuration directory [io.confluent.connect.hdfs.DataWriter]
2020-01-21 15:22:09,757 ERROR || WorkerSinkTask{id=hdfs-sink-0} Task threw an uncaught and unrecoverable exception [org.apache.kafka.connect.runtime.WorkerTask]
java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal character in hostname at index 36: hdfs://namenode.docker-hadoop-master_default:9000
at org.apache.hadoop.net.NetUtils.getCanonicalUri(NetUtils.java:274)
at org.apache.hadoop.hdfs.DistributedFileSystem.canonicalizeUri(DistributedFileSystem.java:1577)
at org.apache.hadoop.fs.FileSystem.getCanonicalUri(FileSystem.java:235)
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:623)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at io.confluent.connect.hdfs.storage.HdfsStorage.exists(HdfsStorage.java:149)
at io.confluent.connect.hdfs.DataWriter.createDir(DataWriter.java:548)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:222)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:102)
at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:84)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.net.URISyntaxException: Illegal character in hostname at index 36: hdfs://namenode.docker-hadoop-master_default:9000
at java.base/java.net.URI$Parser.fail(URI.java:2913)
at java.base/java.net.URI$Parser.parseHostname(URI.java:3448)
at java.base/java.net.URI$Parser.parseServer(URI.java:3297)
at java.base/java.net.URI$Parser.parseAuthority(URI.java:3216)
at java.base/java.net.URI$Parser.parseHierarchical(URI.java:3158)
at java.base/java.net.URI$Parser.parse(URI.java:3114)
at java.base/java.net.URI.<init>(URI.java:685)
at org.apache.hadoop.net.NetUtils.getCanonicalUri(NetUtils.java:272)
... 24 more
2020-01-21 15:22:09,759 ERROR || WorkerSinkTask{id=hdfs-sink-0} Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
By default, Docker Compose builds the network name from the directory the command is run in, joined with an underscore, and underscores are not allowed in hostnames. Hadoop, by default, prefers the hostnames configured in the hdfs-site.xml configuration file.
Ideally, you would not use IPs inside Docker at all, but rather the service names and their exposed ports.
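A sketch of what that could look like here (the network name hadoopnet is an illustrative assumption): attach everything to a user-defined network without an underscore in its name, stop the previous connect container, and address the namenode by its service name instead of by IP:
docker network create hadoopnet
docker network connect hadoopnet namenode
docker network connect hadoopnet zookeeper
docker network connect hadoopnet kafka
docker network connect hadoopnet mysql
docker run -it --rm --name connect --network hadoopnet -p 8083:8083 \
  -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs \
  -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  -e CORE_CONF_fs_defaultFS=hdfs://namenode:9000 \
  debezium/connect:1.0
# and in the hdfs-sink config use "hdfs.url": "hdfs://namenode:9000" instead of the container IP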