Merge pull request #374 from grycap/cont_runtime
Add support to native cont runtime and add tests
srisco authored Feb 14, 2022
2 parents 0b7acfd + b7ec03c commit d61b047
Showing 38 changed files with 2,236 additions and 63 deletions.
21 changes: 20 additions & 1 deletion docs/source/advanced_usage.rst
@@ -230,4 +230,23 @@ Finally, if the image is small enough, SCAR allows to upload it in the function

To help with the creation of slim images, you can use `minicon <https://github.com/grycap/minicon>`_.
Minicon is a general tool to analyze applications and executions of these applications to obtain a filesystem that contains all the dependencies that have been detected.
By using minicon the size of the cowsay image was reduced from 170MB to 11MB.

Setting a specific VPC
----------------------

You can also set specific VPC parameters to configure the network of your Lambda functions.
You only have to add the ``vpc`` field, setting the subnets and security groups, as shown in the
following example::

functions:
aws:
- lambda:
vpc:
SubnetIds:
- subnet-00000000000000000
SecurityGroupIds:
- sg-00000000000000000
name: scar-cowsay
container:
image: grycap/cowsay
163 changes: 163 additions & 0 deletions docs/source/image_env_usage.rst
@@ -0,0 +1,163 @@
Using Lambda Image Environment
==============================

By default, SCAR uses the python3.7 Lambda environment and the udocker program to execute the containers.
In 2021 AWS added native support for ECR container images. SCAR also supports using this environment
to execute your containers.

To use it, you only have to set the Lambda ``runtime`` property to ``image``.
You can set it in the scar configuration file::

{
"aws": {
"lambda": {
"runtime": "image"
}
}
}

Or in the function definition file::

functions:
aws:
- lambda:
runtime: image
name: scar-function
memory: 2048
init_script: script.sh
container:
image: image/name

Or even set it as a parameter in the ``init`` scar call::

scar init -f function_def.yaml -rt image

In this case, the scar client will prepare the image and upload it to AWS ECR, as required by the
Lambda Image Environment.

To use this functionality you need `supervisor <https://github.com/grycap/faas-supervisor>`_
version 1.5.0 or newer.

Using the image runtime, the scar client will build a new container image, adding the supervisor and
other needed files to the user-provided image. This image will then be uploaded to an ECR registry so
that the Lambda environment can create the function. Therefore, the user that executes the scar client
must be able to run docker commands (i.e. be part of the ``docker`` group, see the
`docker documentation <https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user>`_).


Use alpine based images
-----------------------

Using the container image environment there is no limitation on using Alpine-based (musl-based) images.
You only have to add the ``alpine`` flag in the function definition::

functions:
aws:
- lambda:
runtime: image
name: scar-function
memory: 2048
init_script: script.sh
container:
image: image/name
alpine: true

If you use an Alpine-based image and you do not set the ``alpine`` flag, you will get an execution error::

Error: fork/exec /var/task/supervisor: no such file or directory

Use already prepared ECR images
--------------------------------

You can also use a previously prepared ECR image instead of building and pushing it to ECR.
In this case you have to specify the full ECR image name and set the ``create_image``
flag to ``false`` in the function definition::

functions:
aws:
- lambda:
runtime: image
name: scar-function
memory: 2048
init_script: script.sh
container:
image: 000000000000.dkr.ecr.us-east-1.amazonaws.com/scar-function
create_image: false
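
The full ECR image name follows the fixed pattern ``<account_id>.dkr.ecr.<region>.amazonaws.com/<repository>``. A small helper (hypothetical, not part of scar) makes the pattern explicit:

```python
def ecr_image_uri(account_id: str, region: str, repository: str, tag: str = "latest") -> str:
    # ECR registry URIs always follow this fixed naming pattern.
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

print(ecr_image_uri("000000000000", "us-east-1", "scar-function"))
# 000000000000.dkr.ecr.us-east-1.amazonaws.com/scar-function:latest
```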

Do not delete ECR image on function deletion
--------------------------------------------

By default, the scar client deletes the ECR image in the function deletion process.
If you want to keep it for future functions, you can modify the scar configuration
file and set the ``delete_image`` flag to ``false`` in the ``ecr`` configuration section::

{
"aws": {
"ecr": {
"delete_image": false
}
}
}

Or set it in the function definition::

functions:
aws:
- lambda:
runtime: image
name: scar-function
memory: 2048
init_script: script.sh
container:
image: image/name
ecr:
delete_image: false

ARM64 support
-------------

Using the container image environment you can also specify the architecture used to execute your
Lambda function (``x86_64`` or ``arm64``) by setting the ``architectures`` field in the function
definition. If not set, the default architecture (``x86_64``) will be used::

functions:
aws:
- lambda:
runtime: image
architectures:
- arm64
name: scar-function
memory: 2048
init_script: script.sh
container:
image: image/name

EFS support
------------

Using the container image environment you can also configure file system access for your Lambda function.
First you have to set the VPC parameters to use the same subnet where the EFS is deployed. Also verify
that the IAM role set in the scar configuration has the correct permissions and that the security groups are
properly configured to enable access to the NFS port (see `Configuring file system access for Lambda functions <https://docs.aws.amazon.com/lambda/latest/dg/configuration-filesystem.html>`_).
Then you have to add the ``file_system`` field, setting the ARNs and mount paths of the file systems to mount,
as shown in the following example::


functions:
aws:
- lambda:
runtime: image
vpc:
SubnetIds:
- subnet-00000000000000000
SecurityGroupIds:
- sg-00000000000000000
file_system:
- Arn: arn:aws:elasticfilesystem:us-east-1:000000000000:access-point/fsap-00000000000000000
LocalMountPath: /mnt/efs
name: scar-function
memory: 2048
init_script: script.sh
container:
image: image/name

1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -13,6 +13,7 @@ Welcome to SCAR's documentation!
configuration
basic_usage
advanced_usage
image_env_usage
api_gateway
batch
prog_model
18 changes: 18 additions & 0 deletions examples/cowsay-ecr/Dockerfile
@@ -0,0 +1,18 @@
FROM ubuntu:16.04

# Include global arg in this stage of the build
ARG FUNCTION_DIR="/var/task"
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy function code
COPY awslambdaric ${FUNCTION_DIR}
COPY function_config.yaml ${FUNCTION_DIR}
COPY test.sh ${FUNCTION_DIR}

ENV PATH="${FUNCTION_DIR}:${PATH}"

ENTRYPOINT [ "awslambdaric" ]
CMD [ "faassupervisor.supervisor.main" ]
9 changes: 9 additions & 0 deletions examples/cowsay-ecr/basic-cow.yaml
@@ -0,0 +1,9 @@
functions:
aws:
- lambda:
init_script: test.sh
name: scar-ecr-cowsay
container:
#alpine: true
#image: alpine:3.14
image: ubuntu:20.04
34 changes: 34 additions & 0 deletions examples/cowsay-ecr/function_config.yaml
@@ -0,0 +1,34 @@
asynchronous: false
boto_profile: default
config_path: ''
container:
environment:
Variables: {}
image: 974349055189.dkr.ecr.us-east-1.amazonaws.com/micafer:latest
timeout_threshold: 10
deployment:
max_payload_size: 52428800
max_s3_payload_size: 262144000
environment:
Variables: {}
description: Automatically generated lambda function
execution_mode: lambda
handler: scar-ecr-cowsay.lambda_handler
init_script: test.sh
invocation_type: RequestResponse
layers: []
log_level: DEBUG
log_type: Tail
memory: 512
name: scar-ecr-cowsay
region: us-east-1
runtime: image
storage_providers: {}
supervisor:
layer_name: faas-supervisor
license_info: Apache 2.0
version: 1.4.2
tags:
createdby: scar
owner: AIDAJNGRVV2UDO2O4TS4O
timeout: 300
4 changes: 4 additions & 0 deletions examples/cowsay-ecr/test.sh
@@ -0,0 +1,4 @@
#!/bin/sh
env
cat ${INPUT_FILE_PATH}
echo "OK1"
1 change: 1 addition & 0 deletions requirements.txt
@@ -6,3 +6,4 @@ requests
pyyaml
setuptools>=40.8.0
packaging
docker
2 changes: 1 addition & 1 deletion scar/parser/cli/__init__.py
@@ -62,7 +62,7 @@ def _parse_lambda_args(cmd_args: Dict) -> Dict:
lambda_arg_list = ['name', 'asynchronous', 'init_script', 'run_script', 'c_args', 'memory',
'timeout', 'timeout_threshold', 'image', 'image_file', 'description',
'lambda_role', 'extra_payload', ('environment', 'environment_variables'),
'layers', 'lambda_environment', 'list_layers', 'log_level', 'preheat']
'layers', 'lambda_environment', 'list_layers', 'log_level', 'preheat', 'runtime']
lambda_args = DataTypesUtils.parse_arg_list(lambda_arg_list, cmd_args)
# Standardize log level if defined
if "log_level" in lambda_args:
1 change: 1 addition & 0 deletions scar/parser/cli/parents.py
@@ -63,6 +63,7 @@ def create_function_definition_parser():
function_definition_parser.add_argument("-sv", "--supervisor-version",
help=("FaaS Supervisor version. "
"Can be a tag or 'latest'."))
function_definition_parser.add_argument("-rt", "--runtime", help="Lambda runtime")
# Batch (job definition) options
function_definition_parser.add_argument("-bm", "--batch-memory",
help="Batch job memory in megabytes")
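
The new ``-rt``/``--runtime`` option follows the standard argparse pattern. A minimal standalone sketch (this is a hypothetical stand-in parser, not the actual scar code) shows how the flag value reaches the configuration:

```python
import argparse

# Minimal stand-in for scar's function-definition parser: the
# -rt/--runtime option simply stores the Lambda runtime string,
# e.g. "image" for the container image environment.
parser = argparse.ArgumentParser(prog="scar")
parser.add_argument("-f", "--conf-file",
                    help="Yaml file with the function configuration")
parser.add_argument("-rt", "--runtime", help="Lambda runtime")

args = parser.parse_args(["-f", "function_def.yaml", "-rt", "image"])
print(args.runtime)  # image
```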
2 changes: 1 addition & 1 deletion scar/parser/cli/subparsers.py
@@ -117,7 +117,7 @@ def _add_rm_parser(self):
help="Delete all lambda functions",
action="store_true")
group.add_argument("-f", "--conf-file",
help="Yaml file with the function configuration")
help="Yaml file with the function configuration")

def _add_log_parser(self):
log = self.subparser.add_parser('log',
4 changes: 3 additions & 1 deletion scar/providers/aws/__init__.py
@@ -23,6 +23,7 @@
from scar.providers.aws.clients.resourcegroups import ResourceGroupsClient
from scar.providers.aws.clients.s3 import S3Client
from scar.providers.aws.clients.ec2 import EC2Client
from scar.providers.aws.clients.ecr import ElasticContainerRegistryClient


class GenericClient():
@@ -37,7 +38,8 @@ class GenericClient():
'LAMBDA': LambdaClient,
'RESOURCEGROUPS': ResourceGroupsClient,
'S3': S3Client,
'LAUNCHTEMPLATES': EC2Client}
'LAUNCHTEMPLATES': EC2Client,
'ECR': ElasticContainerRegistryClient}

def __init__(self, resource_info: Dict =None):
self.properties = {}
2 changes: 1 addition & 1 deletion scar/providers/aws/batchfunction.py
@@ -188,7 +188,7 @@ def _get_container_properties_single_node_args(self):
]
}
if self.batch.get('enable_gpu'):
job_def_args['containerProperties']['resourceRequirements'] = [
job_def_args['resourceRequirements'] = [
{
'value': '1',
'type': 'GPU'
67 changes: 67 additions & 0 deletions scar/providers/aws/clients/ecr.py
@@ -0,0 +1,67 @@
# Copyright (C) GRyCAP - I3M - UPV
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module with the class necessary to manage the
Elastic Container Registry (ECR) repositories: creation, deletion and configuration."""

import base64
from datetime import datetime, timedelta
from typing import Dict, List
from scar.exceptions import exception
from scar.providers.aws.clients import BotoClient
import scar.logger as logger


class ElasticContainerRegistryClient(BotoClient):
    """A low-level client representing Amazon Elastic Container Registry.
    DOC_URL: https://boto3.readthedocs.io/en/latest/reference/services/ecr.html"""

    # Parameter used by the parent to create the appropriate boto3 client
    _BOTO_CLIENT_NAME = 'ecr'

    def __init__(self, client_args: Dict):
        super().__init__(client_args)
        self.token = None

    @exception(logger)
    def get_authorization_token(self) -> List[str]:
        """Retrieves an authorization token as a [user, password] pair,
        reusing the cached token while it is more than one minute away
        from expiring."""
        if self.token:
            now = datetime.now(self.token['expiresAt'].tzinfo)
            if self.token['expiresAt'] > (now + timedelta(seconds=60)):
                return base64.b64decode(self.token["authorizationToken"]).decode().split(':')

        response = self.client.get_authorization_token()
        self.token = response["authorizationData"][0]
        return base64.b64decode(self.token["authorizationToken"]).decode().split(':')

    @exception(logger)
    def get_registry_id(self) -> str:
        """Returns the ID of the default registry."""
        response = self.client.describe_registry()
        return response["registryId"]

    @exception(logger)
    def describe_repositories(self, **kwargs: Dict) -> Dict:
        """Returns the description of the requested repositories,
        or None if they do not exist."""
        try:
            response = self.client.describe_repositories(**kwargs)
            return response
        except Exception:
            return None

    @exception(logger)
    def create_repository(self, repository_name: str) -> Dict:
        """Creates an ECR repository."""
        return self.client.create_repository(repositoryName=repository_name)

    @exception(logger)
    def delete_repository(self, repository_name: str) -> Dict:
        """Deletes an ECR repository, including any images it contains."""
        return self.client.delete_repository(repositoryName=repository_name, force=True)
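
The ECR authorization token is a base64-encoded ``user:password`` pair that comes with an ``expiresAt`` timestamp; the decode-and-cache logic can be exercised in isolation. A minimal sketch with hypothetical helper names and a stubbed response shaped like boto3's ``ecr.get_authorization_token()["authorizationData"][0]``:

```python
import base64
from datetime import datetime, timedelta, timezone

def decode_ecr_token(authorization_data: dict) -> list:
    # ECR returns base64("<user>:<password>"); docker login needs the split pair.
    return base64.b64decode(authorization_data["authorizationToken"]).decode().split(":")

def token_still_valid(authorization_data: dict, margin_seconds: int = 60) -> bool:
    # Renew a little before the advertised expiry to avoid using a stale token.
    now = datetime.now(timezone.utc)
    return authorization_data["expiresAt"] > now + timedelta(seconds=margin_seconds)

# Stubbed authorizationData entry (hypothetical credentials)
stub = {
    "authorizationToken": base64.b64encode(b"AWS:s3cr3t").decode(),
    "expiresAt": datetime.now(timezone.utc) + timedelta(hours=12),
}
print(decode_ecr_token(stub))   # ['AWS', 's3cr3t']
print(token_still_valid(stub))  # True
```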
