
Applying the MNIST Dataset on FATE

马天逸
2023-12-01

1. Download the MNIST dataset

I shared the "MNIST" dataset via Aliyun Drive; open the link in the Aliyun Drive app to download it.
Link: https://www.aliyundrive.com/s/XUy3CKV9QQS

2. Convert the format

(1) Unzip: extract the .gz files to obtain the -ubyte files.
(2) Create a .py file:

def convert(imgf, labelf, outf, n):
    f = open(imgf, "rb")
    o = open(outf, "w")
    l = open(labelf, "rb")
    f.read(16)  # skip the 16-byte image file header
    l.read(8)   # skip the 8-byte label file header
    images = []
    for i in range(n):
        image = [ord(l.read(1))]          # label first
        for j in range(28*28):
            image.append(ord(f.read(1)))  # then the 784 pixel values
        images.append(image)
    for image in images:
        o.write(",".join(str(pix) for pix in image) + "\n")
    f.close()
    o.close()
    l.close()

convert("train-images.idx3-ubyte", "train-labels.idx1-ubyte",
        "mnist_train.csv", 60000)
convert("t10k-images.idx3-ubyte", "t10k-labels.idx1-ubyte",
        "mnist_test.csv", 10000)
print("Convert Finished!")

(3) Put the Python file and the dataset files in the same folder, run the Python file, and you get the CSV files.
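As a sanity check on the conversion logic, the same parsing can be exercised against tiny synthetic idx-style files. This is a sketch only: the 16/8-byte headers and the 28x28 image size come from the script above, while the file names and pixel values here are made up.

```python
import os
import tempfile

def convert(imgf, labelf, outf, n):
    # Same logic as the script above: skip the headers,
    # then emit one "label,pix0,...,pix783" line per record.
    with open(imgf, "rb") as f, open(labelf, "rb") as l, open(outf, "w") as o:
        f.read(16)  # idx3 image file header
        l.read(8)   # idx1 label file header
        for _ in range(n):
            row = [ord(l.read(1))] + [ord(f.read(1)) for _ in range(28 * 28)]
            o.write(",".join(str(p) for p in row) + "\n")

# Build two tiny fake idx files holding 2 records (arbitrary values).
tmp = tempfile.mkdtemp()
img_path = os.path.join(tmp, "img-ubyte")
lbl_path = os.path.join(tmp, "lbl-ubyte")
csv_path = os.path.join(tmp, "out.csv")
with open(img_path, "wb") as fh:
    fh.write(b"\x00" * 16 + bytes([7] * 784) + bytes([9] * 784))
with open(lbl_path, "wb") as fh:
    fh.write(b"\x00" * 8 + bytes([3, 5]))

convert(img_path, lbl_path, csv_path, 2)
lines = open(csv_path).read().splitlines()
print(len(lines), len(lines[0].split(",")))
```

Each output row should have 785 comma-separated values: the label followed by 784 pixels.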

3. Split the dataset

(1) Split the dataset into two datasets of 30,000 records each.
# Prepend "id" to the header row, number the data rows starting from the second line, and use commas as separators.

awk -F'\t' -v OFS=',' '  NR == 1 {print "id",$0; next}  {print (NR-1),$0}' mnist_train.csv > mnist_train_with_id.csv

# Replace "label" in the header with "y"; in FATE the label column is conventionally named y.

sed -i "s/label/y/g" mnist_train_with_id.csv

# Split mnist_train_with_id.csv into chunks of 30001 lines. This produces two files: mnist_train_3w.csvaa (the header plus 30,000 data rows) and mnist_train_3w.csvab (the remaining 30,000 data rows, without a header). Rename the two files.

split -l 30001 mnist_train_with_id.csv mnist_train_3w.csv
mv mnist_train_3w.csvaa mnist_train_3w_a.csv
mv mnist_train_3w.csvab mnist_train_3w_b.csv

# Copy the header line from file a to the top of file b:
sed -i "1i $(head -n 1 mnist_train_3w_a.csv)" mnist_train_3w_b.csv
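The awk/sed/split pipeline above can be mirrored in pure Python, which makes the intended result easy to see. A sketch on a 4-row toy dataset instead of the real 60,000 rows (the column names here are placeholders):

```python
# Toy stand-in for mnist_train.csv: a header plus 4 data rows.
rows = ["label,p1,p2", "0,10,20", "1,11,21", "2,12,22", "3,13,23"]

# Step 1: prepend "id" to the header and a running index to each data row.
with_id = ["id," + rows[0]] + [f"{i},{r}" for i, r in enumerate(rows[1:], start=1)]

# Step 2: rename the label column to y (FATE's conventional label name).
with_id[0] = with_id[0].replace("label", "y")

# Step 3: split the data rows in half, giving BOTH halves the header
# (this is what the sed "1i" step does for file b).
header, data = with_id[0], with_id[1:]
half = len(data) // 2
part_a = [header] + data[:half]
part_b = [header] + data[half:]

print(part_a)
print(part_b)
```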

4. Copy the datasets

(1) Copy the two files (a and b) to the corresponding directory on each of the two machines:

/data/projects/fate/examples/data

5. Start the services

Note: this section is adapted from the original article by CSDN blogger "AI浩";
original link: https://blog.csdn.net/hhhhhhhhhhwwwwwwwwww/article/details/118894462

7. System operations
7.1 Service management
Run the following as the app user on the target servers (192.168.65.161, 192.168.65.162).

7.1.1 Eggroll service management

source /data/projects/fate/bin/init_env.sh
cd /data/projects/fate/eggroll
sh ./bin/eggroll.sh all start    # start the services

1. Start/stop/check/restart all modules:

sh ./bin/eggroll.sh all start      # start services
sh ./bin/eggroll.sh all stop       # stop services
sh ./bin/eggroll.sh all status     # query service status
sh ./bin/eggroll.sh all restart    # restart services

2. Start/stop/check/restart a single module (one of: clustermanager, nodemanager, rollsite):

sh ./bin/eggroll.sh clustermanager start|stop|status|restart

sh ./bin/eggroll.sh clustermanager start
sh ./bin/eggroll.sh nodemanager start
sh ./bin/eggroll.sh rollsite start

7.1.2 MySQL service management
Start/stop/check/restart the MySQL service:

cd /data/projects/fate/common/mysql/mysql-8.0.13    # FATE 1.6
cd /data/projects/fate/common/mysql/mysql-8.0.28    # FATE 1.7.2
sh ./service.sh start|stop|status|restart

7.1.3 FATE service management

1. Start/stop/check/restart the fate_flow service

# FATE 1.6
source /data/projects/fate/bin/init_env.sh
cd /data/projects/fate/python/fate_flow
sh service.sh start|stop|status|restart

# FATE 1.7.2
source /data/projects/fate/bin/init_env.sh
cd /data/projects/fate/fateflow/bin
sh service.sh start|stop|status|restart

If you start modules one by one, start eggroll and MySQL before fateflow: fateflow depends on eggroll being up.

2. Start/stop/restart the fateboard service

cd /data/projects/fate/fateboard
sh service.sh start|stop|status|restart

7.2 Check processes and ports
Run the following as the app user on the target servers (192.168.65.161, 192.168.65.162).

7.2.1 Check processes
# Check that the processes defined in the deployment plan are running

ps -ef | grep -i clustermanager
ps -ef | grep -i nodemanager
ps -ef | grep -i rollsite
ps -ef | grep -i fate_flow_server.py
ps -ef | grep -i fateboard

7.2.2 Check process ports
# Check that the ports defined in the deployment plan are listening

#clustermanager
netstat -tlnp | grep 4670
#nodemanager
netstat -tlnp | grep 4671
#rollsite
netstat -tlnp | grep 9370
#fate_flow_server
netstat -tlnp | grep 9360
#fateboard
netstat -tlnp | grep 8080

7.3 Service logs

Service                 Log path
eggroll                 /data/projects/fate/eggroll/logs
fate_flow & job logs    /data/projects/fate/python/logs
fateboard               /data/projects/fate/fateboard/logs
mysql                   /data/logs/mysql/

6. Model training

This follows the original article by CSDN blogger "VV一笑ヽ"; original link: https://blog.csdn.net/Pure_vv/article/details/115741109
(1) Upload data
1. Host side
Enter the directory:

/data/projects/fate/examples/dsl/v2/homo_nn

2. Create upload_data_host.json:

{
    "file": "examples/data/mnist_train_3w_a.csv",
    "head": 1,
    "partition": 10,
    "work_mode": 1,
    "table_name": "homo_mnist_host",
    "namespace": "homo_mnist_host"
}

3. Upload the data

(venv) [root@vm_0_1_centos fateboard]# python /data/projects/fate/python/fate_flow/fate_flow_client.py -f upload -c /data/projects/fate/examples/dsl/v2/homo_nn/upload_data_host.json -drop 1

4. Result

{
    "data": {
        "board_url": "http://192.168.204.128:8080/index.html#/dashboard?job_id=202110212014456956421&role=local&party_id=0",
        "job_dsl_path": "/data/projects/fate/jobs/202110212014456956421/job_dsl.json",
        "job_id": "202110212014456956421",
        "job_runtime_conf_on_party_path": "/data/projects/fate/jobs/202110212014456956421/local/job_runtime_on_party_conf.json",
        "job_runtime_conf_path": "/data/projects/fate/jobs/202110212014456956421/job_runtime_conf.json",
        "logs_directory": "/data/projects/fate/logs/202110212014456956421",
        "model_info": {
            "model_id": "local-0#model",
            "model_version": "202110212014456956421"
        },
        "namespace": "homo_mnist_host",
        "pipeline_dsl_path": "/data/projects/fate/jobs/202110212014456956421/pipeline_dsl.json",
        "table_name": "homo_mnist_host",
        "train_runtime_conf_path": "/data/projects/fate/jobs/202110212014456956421/train_runtime_conf.json"
    },
    "jobId": "202110212014456956421",
    "retcode": 0,
    "retmsg": "success"
}

(2) Upload data
1. Guest side
Enter the directory:

/data/projects/fate/examples/dsl/v2/homo_nn

2. Create upload_data_guest.json:

{
    "file": "examples/data/mnist_train_3w_b.csv",
    "head": 1,
    "partition": 10,
    "work_mode": 1,
    "table_name": "homo_mnist_guest",
    "namespace": "homo_mnist_guest"
}
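The host and guest upload configs above differ only in the file, table_name, and namespace values; a small helper can generate both. A sketch (the make_upload_conf helper is mine; the field values come from the two configs above):

```python
import json

def make_upload_conf(role):
    # role is "host" or "guest"; all other fields are shared between the two configs.
    suffix = "a" if role == "host" else "b"
    return {
        "file": f"examples/data/mnist_train_3w_{suffix}.csv",
        "head": 1,
        "partition": 10,
        "work_mode": 1,
        "table_name": f"homo_mnist_{role}",
        "namespace": f"homo_mnist_{role}",
    }

host_conf = make_upload_conf("host")
guest_conf = make_upload_conf("guest")
print(json.dumps(host_conf, indent=4))
```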

3. Upload the data

(venv) [root@vm_0_1_centos fateboard]# python /data/projects/fate/python/fate_flow/fate_flow_client.py -f upload -c /data/projects/fate/examples/dsl/v2/homo_nn/upload_data_guest.json -drop 1

4. Result

{
    "data": {
        "board_url": "http://192.168.204.129:8080/index.html#/dashboard?job_id=202110212027240007551&role=local&party_id=0",
        "job_dsl_path": "/data/projects/fate/jobs/202110212027240007551/job_dsl.json",
        "job_id": "202110212027240007551",
        "job_runtime_conf_on_party_path": "/data/projects/fate/jobs/202110212027240007551/local/job_runtime_on_party_conf.json",
        "job_runtime_conf_path": "/data/projects/fate/jobs/202110212027240007551/job_runtime_conf.json",
        "logs_directory": "/data/projects/fate/logs/202110212027240007551",
        "model_info": {
            "model_id": "local-0#model",
            "model_version": "202110212027240007551"
        },
        "namespace": "homo_mnist_guest",
        "pipeline_dsl_path": "/data/projects/fate/jobs/202110212027240007551/pipeline_dsl.json",
        "table_name": "homo_mnist_guest",
        "train_runtime_conf_path": "/data/projects/fate/jobs/202110212027240007551/train_runtime_conf.json"
    },
    "jobId": "202110212027240007551",
    "retcode": 0,
    "retmsg": "success"
}

(3) Build the model

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))

# Get the model as JSON:
json = model.to_json()
print(json)

Notes:
1. After entering the Python container, install tensorflow and keras if they are not already installed, for example:

pip uninstall tensorflow
pip uninstall tensorflow-cpu
pip uninstall keras
pip uninstall fate-client
pip uninstall numpy

pip install tensorflow==1.14 -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install keras==2.2.5 -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install numpy==1.16.4 -i https://pypi.tuna.tsinghua.edu.cn/simple/

Then type python to enter the Python interpreter and enter the code above directly.
2. Mind the version numbers. I used a VM; see the version compatibility table I referenced: https://blog.csdn.net/ysk2931/article/details/121132257?spm=1001.2014.3001.5501

The result:

{"class_name": "Sequential", "config": {"name": "sequential_1", "layers": [{"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "batch_input_shape": [null, 784], "dtype": "float32", "units": 512, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "units": 256, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dense", "config": {"name": "dense_3", "trainable": true, "units": 10, "activation": "softmax", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}, "keras_version": "2.2.4", "backend": "tensorflow"}
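Since this to_json() output is what goes into the nn_define field of the runtime conf later, it can be handy to inspect it programmatically. A sketch, using a trimmed-down copy of the JSON above (non-essential keys dropped):

```python
import json

# Abbreviated version of the to_json() output above.
model_json = '''
{"class_name": "Sequential", "config": {"name": "sequential_1", "layers": [
  {"class_name": "Dense", "config": {"units": 512, "activation": "relu", "batch_input_shape": [null, 784]}},
  {"class_name": "Dense", "config": {"units": 256, "activation": "relu"}},
  {"class_name": "Dense", "config": {"units": 10, "activation": "softmax"}}
]}}
'''

spec = json.loads(model_json)
# Layer sizes and activations of the 784 -> 512 -> 256 -> 10 network.
layers = [(l["config"]["units"], l["config"]["activation"])
          for l in spec["config"]["layers"]]
print(layers)
```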

(4) Modify the configuration files
1. DSL overview
To make model construction more flexible, FATE uses a custom domain-specific language (DSL) to describe jobs. In the DSL, modules (e.g. data I/O data_io, feature engineering, regression, classification) are organized into a directed acyclic graph (DAG), so users can flexibly combine algorithm modules as needed.

2. DSL configuration file
Note: you can use the sample file test_homo_dnn_multi_layer_dsl.json; I created a new file, m_dsl.json:

{
  "components": {
    "reader_0": {
      "module": "Reader",
      "output": {
        "data": [
          "data"
        ]
      }
    },
    "dataio_0": {
      "module": "DataIO",
      "input": {
        "data": {
          "data": [
            "reader_0.data"
          ]
        }
      },
      "output": {
        "data": [
          "data"
        ],
        "model": [
          "model"
        ]
      }
    },
    "homo_nn_0": {
      "module": "HomoNN",
      "input": {
        "data": {
          "train_data": [
            "dataio_0.data"
          ]
        }
      },
      "output": {
        "data": [
          "data"
        ],
        "model": [
          "model"
        ]
      }
    }
  }
}
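The DSL above wires reader_0 -> dataio_0 -> homo_nn_0 into a DAG: each component's input.data entries name the upstream component's output. A sketch that extracts the edges from such a components dict (the traversal logic is mine, not FATE's):

```python
# A trimmed copy of the m_dsl.json components section above.
components = {
    "reader_0": {"module": "Reader", "output": {"data": ["data"]}},
    "dataio_0": {"module": "DataIO",
                 "input": {"data": {"data": ["reader_0.data"]}},
                 "output": {"data": ["data"], "model": ["model"]}},
    "homo_nn_0": {"module": "HomoNN",
                  "input": {"data": {"train_data": ["dataio_0.data"]}},
                  "output": {"data": ["data"], "model": ["model"]}},
}

edges = []
for name, comp in components.items():
    for inputs in comp.get("input", {}).get("data", {}).values():
        for src in inputs:
            # "reader_0.data" names the upstream component and its output slot.
            edges.append((src.split(".")[0], name))

print(edges)
```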

3. Runtime configuration file
Note: this does not mean "run the configuration file"; it is a second configuration file, called the Submit Runtime Conf. You can refer to the sample file test_homo_dnn_multi_layer_conf.json; I created a new file, m_conf.json:

{
    "dsl_version": 2,
    "initiator": {
        "role": "guest",
        "party_id": 9999
    },
    "role": {
        "arbiter": [
            10000
        ],
        "host": [
            10000
        ],
        "guest": [
            9999
        ]
    },
    "job_parameters": {
        "common": {
            "work_mode": 1,
            "backend": 0
        }
    },
    "component_parameters": {
        "common": {
            "dataio_0": {
                "with_label": true
            },
            "homo_nn_0": {
		        "encode_label":true,
                "max_iter": 10,
                "batch_size": -1,
                "early_stop": {
                    "early_stop": "diff",
                    "eps": 0.0001
                },
                "optimizer": {
                    "learning_rate": 0.05,
                    "decay": 0.0,
                    "beta_1": 0.9,
                    "beta_2": 0.999,
                    "epsilon": 1e-07,
                    "amsgrad": false,
                    "optimizer": "Adam"
                },
                "loss": "categorical_crossentropy",
                "metrics": [
                    "Hinge",
                    "accuracy",
                    "AUC"
                ],
                "nn_define": {"class_name": "Sequential", "config": {"name": "sequential", "layers": [{"class_name": "InputLayer", "config": {"batch_input_shape": [null, 784], "dtype": "float32", "sparse": false, "ragged": false, "name": "dense_input"}}, {"class_name": "Dense", "config": {"name": "dense", "trainable": true, "batch_input_shape": [null, 784], "dtype": "float32", "units": 512, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "dtype": "float32", "units": 256, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "dtype": "float32", "units": 10, "activation": "softmax", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}, "keras_version": "2.4.0", "backend": "tensorflow"},
                "config_type": "keras"
            }
        },
        "role": {
            "guest": {
                "0": {
                    "dataio_0": {
                        "with_label": true,
                        "output_format": "dense"
                    },
                    "reader_0": {
                        "table": {
                            "name": "homo_mnist_guest",
                            "namespace": "homo_mnist_guest"
                        }
                    }
                }
            },
            "host": {
                "0": {
                    "dataio_0": {
                        "with_label": true
                    },
                    "reader_0": {
                        "table": {
                            "name": "homo_mnist_host",
                            "namespace": "homo_mnist_host"
                        }
                    }
                }
            }
        }
    }
}

Things to change and watch out for:

initiator: the party that initiates the job; change it to match your setup.

role: the role definitions; update the party ids to match your deployment.

job_parameters.common.work_mode: 0 means standalone mode, 1 means cluster mode.

component_parameters.common.homo_nn_0: add "encode_label": true to one-hot encode the labels; without it, an error occurs later during training.
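encode_label matters because categorical_crossentropy with a 10-unit softmax output expects each label as a one-hot vector, not an integer digit. A minimal sketch of the encoding (plain Python for illustration, not FATE's implementation):

```python
def one_hot(label, num_classes=10):
    # Integer digit -> one-hot vector, e.g. 3 -> [0,0,0,1,0,0,0,0,0,0]
    vec = [0] * num_classes
    vec[label] = 1
    return vec

encoded = [one_hot(y) for y in [3, 5, 0]]
print(encoded[0])
```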

4. Submit the job and train the model
After saving the newly created (or modified) m_dsl.json and m_conf.json, submit the job. Enter the Python container (the prompt shows (app-root) bash-4.2#) and run:

python /data/projects/fate/python/fate_flow/fate_flow_client.py -f submit_job -c /data/projects/fate/examples/dsl/v2/homo_nn/m_conf.json -d /data/projects/fate/examples/dsl/v2/homo_nn/m_dsl.json

The result:

{
    "data": {
        "board_url": "http://192.168.204.129:8080/index.html#/dashboard?job_id=202110221448478563441&role=guest&party_id=9999",
        "job_dsl_path": "/data/projects/fate/jobs/202110221448478563441/job_dsl.json",
        "job_id": "202110221448478563441",
        "job_runtime_conf_on_party_path": "/data/projects/fate/jobs/202110221448478563441/guest/job_runtime_on_party_conf.json",
        "job_runtime_conf_path": "/data/projects/fate/jobs/202110221448478563441/job_runtime_conf.json",
        "logs_directory": "/data/projects/fate/logs/202110221448478563441",
        "model_info": {
            "model_id": "arbiter-10000#guest-9999#host-10000#model",
            "model_version": "202110221448478563441"
        },
        "pipeline_dsl_path": "/data/projects/fate/jobs/202110221448478563441/pipeline_dsl.json",
        "train_runtime_conf_path": "/data/projects/fate/jobs/202110221448478563441/train_runtime_conf.json"
    },
    "jobId": "202110221448478563441",
    "retcode": 0,
    "retmsg": "success"
}

7. Model prediction

(1) Modify the configuration file
You can use the sample file test_homo_dnn_multi_layer_predict_dsl.json; I created p_dsl.json:

{
  "components": {
    "dataio_0": {
      "input": {
        "data": {
          "data": [
            "reader_0.data"
          ]
        },
        "model": [
          "pipeline.dataio_0.model"
        ]
      },
      "module": "DataIO",
      "output": {
        "data": [
          "data"
        ]
      }
    },
    "homo_nn_0": {
      "input": {
        "data": {
          "test_data": [
            "dataio_0.data"
          ]
        },
        "model": [
          "pipeline.homo_nn_0.model"
        ]
      },
      "module": "HomoNN",
      "output": {
        "data": [
          "data"
        ]
      }
    },
    "reader_0": {
      "module": "Reader",
      "output": {
        "data": [
          "data"
        ]
      }
    }
  }
}

(2) Deploy the model
Enter the Python container.
Deploy the model with the model id and model version from the previous training run, pointing at the newly created p_dsl.json file:

 flow model deploy --model-id arbiter-10000#guest-9999#host-10000#model --model-version 202110221448478563441 --dsl-path /data/projects/fate/examples/dsl/v2/homo_nn/p_dsl.json

Note: if this step reports an error, check whether fate-client is installed:

pip install fate-client -i https://pypi.tuna.tsinghua.edu.cn/simple

After installation completes, initialize flow:

 flow init --ip 192.168.204.129 --port 9380
 flow init --ip 192.168.204.128 --port 9380

Initialization result:

{
    "retcode": 0,
    "retmsg": "Fate Flow CLI has been initialized successfully."
}


Model deployment result:

{
    "data": {
        "arbiter": {
            "10000": 0
        },
        "detail": {
            "arbiter": {
                "10000": {
                    "retcode": 0,
                    "retmsg": "deploy model of role arbiter 10000 success"
                }
            },
            "guest": {
                "9999": {
                    "retcode": 0,
                    "retmsg": "deploy model of role guest 9999 success"
                }
            },
            "host": {
                "10000": {
                    "retcode": 0,
                    "retmsg": "deploy model of role host 10000 success"
                }
            }
        },
        "guest": {
            "9999": 0
        },
        "host": {
            "10000": 0
        },
        "model_id": "arbiter-10000#guest-9999#host-10000#model",
        "model_version": "202110251456362622241"
    },
    "retcode": 0,
    "retmsg": "success"
}
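Note that the model_version in this deploy response (202110251456362622241) is new and differs from the training version; the predict conf must use the values from the deploy output. A sketch that pulls them out of the response for the predict job's job_parameters (the parsing is mine):

```python
import json

# Trimmed copy of the deploy response above.
deploy_response = '''
{
    "data": {
        "model_id": "arbiter-10000#guest-9999#host-10000#model",
        "model_version": "202110251456362622241"
    },
    "retcode": 0,
    "retmsg": "success"
}
'''

resp = json.loads(deploy_response)
assert resp["retcode"] == 0, resp["retmsg"]  # bail out on a failed deploy
job_parameters = {
    "job_type": "predict",
    "model_id": resp["data"]["model_id"],
    "model_version": resp["data"]["model_version"],
}
print(job_parameters["model_version"])
```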

3. Runtime configuration file
As with the earlier uploads, create upload_data_test.json and prepare the test dataset:

awk -F'\t' -v OFS=',' '
NR == 1 {print "id",$0; next}
{print (NR-1),$0}' mnist_test.csv > mnist_test_with_id.csv

sed -i "s/label/y/g" mnist_test_with_id.csv

You can refer to the sample file test_homo_dnn_multi_layer_predict_conf.json; I created p_conf.json:

{
    "dsl_version": 2,
    "initiator": {
        "role": "guest",
        "party_id": 9999
    },
    "role": {
        "arbiter": [
            10000
        ],
        "host": [
            10000
        ],
        "guest": [
            9999
        ]
    },
    "job_parameters": {
        "common": {
            "work_mode": 1,
            "backend": 0,
            "job_type": "predict",
            "model_id": "arbiter-10000#guest-9999#host-10000#model",
            "model_version": "202110221448478563441"
        }
    },
    "component_parameters": {
        "role": {
            "guest": {
                "0": {
                    "reader_0": {
                        "table": {
                            "name": "homo_mnist_test",
                            "namespace": "homo_mnist_test"
                        }
                    }
                }
            },
            "host": {
                "0": {
                    "reader_0": {
                        "table": {
                            "name": "homo_mnist_test",
                            "namespace": "homo_mnist_test"
                        }
                    }
                }
            }
        }
    }
}

Notes:

job_parameters.model_id and model_version must match your own model; use the corresponding values from the output of the deploy step!

Under component_parameters, update the table name and namespace for both host and guest. My test dataset preparation follows:

As with the earlier uploads, create upload_data_test.json:

{
    "file": "examples/data/mnist_test.csv",
    "head": 1,
    "partition": 10,
    "work_mode": 1,
    "table_name": "homo_mnist_test",
    "namespace": "homo_mnist_test"
}

Note the changes to file, table_name, and namespace.

After saving the file, upload the data. First enter the host's Python container, then run:

python /data/projects/fate/python/fate_flow/fate_flow_client.py -f upload -c /data/projects/fate/examples/dsl/v2/homo_nn/upload_data_test.json

Note: I uploaded the data on both the host and guest sides; I am not sure whether that is required or whether uploading on the host side alone is enough.

4. Run the prediction
The guest side initiates the prediction job:

python /data/projects/fate/python/fate_flow/fate_flow_client.py -f submit_job -c /data/projects/fate/examples/dsl/v2/homo_nn/p_conf.json -d /data/projects/fate/examples/dsl/v2/homo_nn/p_dsl.json