Building Alexa Skills in Swift
by Claus Höfele
The Alexa Voice Service is Amazon’s cloud service that understands natural language and allows users to interact with devices by using their voice. You usually associate Alexa with Amazon’s voice-enabled speakers, such as the Echo, but Alexa can potentially run on any connected device with a microphone and a speaker.
Unlike Apple’s Siri, whose extensions are limited to specific domains, Alexa’s API enables developers to implement a broad range of custom voice services called “skills.” Using Swift allows iOS developers (like me) to expand their existing skill set to include server-side programming and take part in the trend towards voice user interfaces.
Simply put, Alexa sends your skill a JSON message with the user’s intent and your code answers with a JSON message that determines what Alexa will answer to the user.
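For example, a request for an intent and the corresponding response look roughly like this (heavily abbreviated; the real messages contain additional fields such as session, application, and user identifiers):

Request (abbreviated):

{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.example",
    "intent": { "name": "TestIntent" }
  }
}

Response (abbreviated):

{
  "version": "1.0",
  "response": {
    "outputSpeech": { "type": "PlainText", "text": "Hello Swift" },
    "shouldEndSession": true
  }
}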
Since I prefer to implement this functionality in Swift, I use AlexaSkillsKit, a Swift library that I wrote. It takes care of parsing JSON requests from Amazon, generating the proper responses and providing convenience methods to handle Alexa features.
The code for your custom skill can run as either a stand-alone web service or an AWS Lambda function. Using Lambda, Amazon’s serverless computing platform, Amazon will take care of scaling and running your Swift code — this is the reason I’ll use this deployment type for the finished skill. As you’ll see, however, the web service option is really useful while developing your skill.
Note that out of the box, Lambda only supports code written in JavaScript (Node.js), Python, and Java. But it’s easy to extend this to executables written in any programming language you want. My article Serverless Swift provides a step-by-step guide on how to do this.
To summarize, you’ll need the following for your Alexa skill:
An implementation of your skill’s functionality in Swift using AlexaSkillsKit
A Lambda function set up with your Swift code using the AWS Console
An Alexa Skill configured in the Alexa Console that triggers your Lambda function
Note that the Alexa Console and the AWS Console are two separate services that you need to sign up for.
To simplify your first steps, I created a repo with a sample app. swift-lambda-app contains code and scripts to quickly get you started with writing a custom Alexa skill in Swift and deploying it to AWS Lambda.
The sample app uses a standard Swift Package Manager directory layout and package file, so swift build, swift test, and swift package generate-xcodeproj work as expected. Check out the SPM documentation for more info.
There are three targets:
AlexaSkill: This is a library with the code that implements the custom Alexa skill. It's a separate library so it can be used by the other two targets. Also, libraries have ENABLE_TESTABILITY enabled by default, which allows you to access internal methods and properties in your unit tests.
Lambda: The command line executable for deployment to Lambda. This program uses stdin and stdout for processing data.
Server (macOS only): To simplify implementing a custom Alexa Skill, the server provides an HTTP interface to the AlexaSkill target. This HTTP server can be exposed publicly via ngrok and configured in the Alexa console, which enables you to develop and debug an Alexa skill with code running on your development computer. This target is macOS only because it wasn’t possible to cleanly separate target dependencies and I didn’t want to link libraries intended for server development to the Lambda executable used for deployment.
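To illustrate the layout, here is a minimal sketch of what a package manifest for these three targets might look like (Swift 3-era SwiftPM syntax; dependency URLs and versions are illustrative, and the actual swift-lambda-app manifest may differ, for example in how it keeps the Server target and its web framework dependency macOS-only):

import PackageDescription

let package = Package(
    name: "swift-lambda-app",
    targets: [
        // Shared skill logic, usable from both executables
        Target(name: "AlexaSkill"),
        // Command line executable deployed to Lambda
        Target(name: "Lambda", dependencies: ["AlexaSkill"]),
        // Local HTTP server for development (macOS only)
        Target(name: "Server", dependencies: ["AlexaSkill"])
    ],
    dependencies: [
        // URLs and versions are illustrative, not taken from the repo
        .Package(url: "https://github.com/choefele/AlexaSkillsKit.git", majorVersion: 0),
        .Package(url: "https://github.com/IBM-Swift/Kitura.git", majorVersion: 1)
    ]
)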
For development, I recommend a Test-driven Development approach against the library target, because this results in the quickest turnaround for code changes. Uploading to Lambda to quickly verify changes isn’t really an option because of slow uploading times. Exposing your functionality via HTTPS as described below, however, enables you to test and debug your functionality in a slightly different way.
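As an illustration, a unit test against the library target could look like the following sketch. It exercises the AlexaSkillHandler that is implemented below; makeLaunchRequest() and makeSession() are hypothetical helpers you would write to construct AlexaSkillsKit's request and session values from fixture data, and the expectation accounts for the asynchronous next callback:

import XCTest
import AlexaSkillsKit
@testable import AlexaSkill

class AlexaSkillHandlerTests: XCTestCase {
    func testLaunchRequestReturnsSuccess() {
        let handler = AlexaSkillHandler()
        let callbackExpectation = expectation(description: "next callback is called")

        // makeLaunchRequest() and makeSession() are hypothetical test helpers
        // that build LaunchRequest and Session values from fixture data.
        handler.handleLaunch(request: makeLaunchRequest(), session: makeSession()) { result in
            // A launch should produce a successful StandardResult.
            if case .success = result {
                callbackExpectation.fulfill()
            } else {
                XCTFail("Expected a successful response")
            }
        }

        waitForExpectations(timeout: 1)
    }
}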
Start with implementing the RequestHandler protocol. AlexaSkillsKit parses requests from Alexa and passes the data on to methods required by this protocol.
public protocol RequestHandler {
    func handleLaunch(request: LaunchRequest, session: Session, next: @escaping (StandardResult) -> ())
    func handleIntent(request: IntentRequest, session: Session, next: @escaping (StandardResult) -> ())
    func handleSessionEnded(request: SessionEndedRequest, session: Session, next: @escaping (VoidResult) -> ())
}
For example, a launch request would result in AlexaSkillsKit calling the handleLaunch() method.
import Foundation
import AlexaSkillsKit

public class AlexaSkillHandler: RequestHandler {
    public init() {}

    public func handleLaunch(request: LaunchRequest, session: Session, next: @escaping (StandardResult) -> ()) {
        let standardResponse = generateResponse(message: "Hello Swift")
        next(.success(standardResponse: standardResponse, sessionAttributes: session.attributes))
    }
}
In the request handler, you can implement whatever logic your skill requires. To enable asynchronous code (for example, calling another HTTP service), the result is passed on via the next callback. next takes an enum that is either .success, containing an Alexa response, or .failure in case a problem occurred.
To keep things simple, we’ll pass back a message that Alexa will speak out loud to the user:
func generateResponse(message: String) -> StandardResponse {
    let outputSpeech = OutputSpeech.plain(text: message)
    return StandardResponse(outputSpeech: outputSpeech)
}
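The remaining protocol methods can be implemented the same way. Here is a minimal sketch for handleIntent and handleSessionEnded, assuming the single TestIntent configured later in this article (the exact property names on IntentRequest and the shape of VoidResult come from AlexaSkillsKit, so check the library if they differ):

public func handleIntent(request: IntentRequest, session: Session, next: @escaping (StandardResult) -> ()) {
    // The intent name mirrors the "intent" field of the incoming JSON.
    let message = request.intent.name == "TestIntent"
        ? "Hello from the test intent"
        : "Sorry, I don't know that one"
    next(.success(standardResponse: generateResponse(message: message), sessionAttributes: session.attributes))
}

public func handleSessionEnded(request: SessionEndedRequest, session: Session, next: @escaping (VoidResult) -> ()) {
    // Nothing to clean up in this simple skill.
    next(.success)
}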
Invocation of a RequestHandler as part of a Swift server is done via Amazon’s HTTPS API where the Alexa service calls your server with a POST request. In the following code, Kitura is used as a web framework, but any other web framework would work equally well:
import Foundation
import AlexaSkillsKit
import AlexaSkill
import Kitura

// Kitura router that receives the POST requests from the Alexa service
let router = Router()
router.all("/") { request, response, next in
    var data = Data()
    let _ = try? request.read(into: &data)

    let requestDispatcher = RequestDispatcher(requestHandler: AlexaSkillHandler())
    requestDispatcher.dispatch(data: data) { result in
        switch result {
        case .success(let data):
            response.send(data: data).status(.OK)
        case .failure(let error):
            response.send(error.message).status(.badRequest)
        }

        next()
    }
}

Kitura.addHTTPServer(onPort: 8090, with: router)
Kitura.run()
To run a local HTTPS server:
Make sure the sample builds by running swift build
Generate an Xcode project with swift package generate-xcodeproj
Install ngrok via brew cask install ngrok. This tool allows you to expose a local HTTP server to the internet
Run ngrok http 8090 and copy the HTTPS URL generated by ngrok (it looks similar to https://c4ba192c.ngrok.io)
ngrok exposes your local server to the public internet thus allowing the Alexa Voice Service to call into your custom skill running in Xcode.
To hook up your custom skill to Alexa:
Go to the Alexa console and create a new skill
Intent: {"intents": [{"intent": "TestIntent"}]}
Service endpoint type: HTTPS (use the URL from ngrok)
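Besides the intent schema, the interaction model needs at least one sample utterance that maps a spoken phrase to the intent. For the schema above, a single line along these lines is enough:

TestIntent test swift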
Now you can test the skill in the Alexa Console’s Service Simulator using the utterance “test swift”. This will call your local server allowing you to modify and debug your code while interacting with the Alexa service.
Before uploading to Lambda, it’s worthwhile to run your unit tests in a Linux environment and run integration tests that simulate the execution environment. The sample provides run-unit-tests.sh to do the former and run-integration-tests.sh to do the latter.
run-unit-tests.sh builds and tests the Lambda target inside a Swift Docker container based on Ubuntu because there's currently no Swift compiler for Amazon Linux (based on RHEL). Executables built on different Linux distributions are compatible with each other if you provide all dependencies necessary to run the program. For this reason, the script captures all shared libraries required to run the executable using ldd.
To prove that the resulting package works, run-integration-tests.sh runs a release build of the Swift code inside a Docker container that comes close to Lambda’s execution environment (unfortunately, Amazon only provides a few Docker images that don't necessarily match what Lambda is using).
The integration with Lambda is done via a small Node.js script that uses the child_process module to run the Swift executable. The script follows Amazon's recommendations to run arbitrary executables in AWS Lambda.
After configuring Travis, you can also run the same integration script for every commit.
For Lambda, you need to create an executable that takes input from stdin and writes output to stdout. This can be done with the following code:
import Foundation
import AlexaSkillsKit
import AlexaSkill

do {
    let data = FileHandle.standardInput.readDataToEndOfFile()
    let requestDispatcher = RequestDispatcher(requestHandler: AlexaSkillHandler())
    let responseData = try requestDispatcher.dispatch(data: data)
    FileHandle.standardOutput.write(responseData)
} catch let error as MessageError {
    let data = error.message.data(using: .utf8) ?? Data()
    FileHandle.standardOutput.write(data)
}
Note that this code uses the same RequestHandler that was used for the HTTP server thus minimizing the differences to the development environment.
To deploy your code to Lambda:
Run run-integration-tests.sh to produce a zip file at .build/lambda/lambda.zip with all required files to upload to Lambda
Create a new Lambda function in the AWS Console in the US East/N. Virginia region (for Europe use EU/Ireland)
Once you've uploaded the Lambda function, you can use the test actions in the AWS Console, for example the Start Session action.
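A Start Session test event is simply a LaunchRequest in the JSON format described earlier; an abbreviated version looks like this (identifiers are illustrative), and the function should answer with the "Hello Swift" response generated by the handler:

{
  "version": "1.0",
  "session": {
    "new": true,
    "sessionId": "amzn1.echo-api.session.example",
    "application": { "applicationId": "amzn1.ask.skill.example" },
    "user": { "userId": "amzn1.ask.account.example" }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.example",
    "timestamp": "2017-05-01T00:00:00Z",
    "locale": "en-US"
  }
}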
After creating the Lambda function, you can now create an Alexa skill. If you have previously created an Alexa skill for the local HTTP server, the configuration is the same; the only difference is the service endpoint:
Go to the Alexa console and create a new skill
Intent: {"intents": [{"intent": "TestIntent"}]}
Service endpoint type: AWS Lambda ARN (use the ARN for the Lambda function from the AWS Console)
Now you can test the skill in the Alexa Console using the utterance “test swift”. More details on configuring Alexa skills can be found on Amazon’s developer portal.
Check out the swift-lambda-app repo on GitHub for the code and scripts to develop and deploy a simple Alexa skill in Swift. In future articles, I'll provide more details on how to write useful skills. Meanwhile, you can browse Amazon's documentation or contact me on Twitter if you have any questions.
Originally published at https://www.freecodecamp.org/news/building-alexa-skills-in-swift-3d596aa0ee95/