<@U01J90KBSU9> Sure! This is the output:
```
tar: Removing leading `/' from member names
Traceback (most recent call last):
  /usr/local/bin/pyflyte-execute:8 in <module>
    ❱ 8 sys.exit(execute_task_cmd())
  /usr/local/lib/python3.10/site-packages/click/core.py:1130 in __call__
    ❱ 1130 return self.main(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/click/core.py:1055 in main
    ❱ 1055 rv = self.invoke(ctx)
  /usr/local/lib/python3.10/site-packages/click/core.py:1404 in invoke
    ❱ 1404 return ctx.invoke(self.callback, **ctx.params)
  /usr/local/lib/python3.10/site-packages/click/core.py:760 in invoke
    ❱ 760 return __callback(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py:471 in execute_task_cmd
    ❱ 471 _execute_task(
  /usr/local/lib/python3.10/site-packages/flytekit/exceptions/scopes.py:160 in system_entry_point
    ❱ 160 return wrapped(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py:347 in _execute_task
    ❱ 347 _task_def = resolver_obj.load_task(loader_args=resolver_args)
  /usr/local/lib/python3.10/site-packages/flytekit/core/utils.py:295 in wrapper
    ❱ 295 return func(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/flytekit/core/python_auto_container.py:235 in load_task
    ❱ 235 task_module = importlib.import_module(name=task_module)  # typ
  /usr/local/lib/python3.10/importlib/__init__.py:126 in import_module
    ❱ 126 return _bootstrap._gcd_import(name[level:], package, level)
  in _gcd_import:1050
  in _find_and_load:1027
  in _find_and_load_unlocked:1006
  in _load_unlocked:688
  in exec_module:883
  in _call_with_frames_removed:241
  /root/flyte3.py:7 in <module>
    ❱ 7 from flytekitplugins.spark import Spark
ModuleNotFoundError: No module named 'flytekitplugins.spark'

Traceback (most recent call last):
  /usr/local/bin/pyflyte-fast-execute:8 in <module>
    ❱ 8 sys.exit(fast_execute_task_cmd())
  /usr/local/lib/python3.10/site-packages/click/core.py:1130 in __call__
    ❱ 1130 return self.main(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/click/core.py:1055 in main
    ❱ 1055 rv = self.invoke(ctx)
  /usr/local/lib/python3.10/site-packages/click/core.py:1404 in invoke
    ❱ 1404 return ctx.invoke(self.callback, **ctx.params)
  /usr/local/lib/python3.10/site-packages/click/core.py:760 in invoke
    ❱ 760 return __callback(*args, **kwargs)
  /usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py:508 in fast_execute_task_cmd
    ❱ 508 subprocess.run(cmd, check=True)
  /usr/local/lib/python3.10/subprocess.py:526 in run
    ❱ 526 raise CalledProcessError(retcode, process.args,
CalledProcessError: Command '['pyflyte-execute', '--inputs', 's3://my-s3-bucket/metadata/propeller/flytesnacks-development-f52f3d40c3cf849a895a/n0/data/inputs.pb', '--output-prefix', 's3://my-s3-bucket/metadata/propeller/flytesnacks-development-f52f3d40c3cf849a895a/n0/data/0', '--raw-output-data-prefix', 's3://my-s3-bucket/data/rr/f52f3d40c3cf849a895a-n0-0', '--checkpoint-path', 's3://my-s3-bucket/data/rr/f52f3d40c3cf849a895a-n0-0/_flytecheckpoints', '--prev-checkpoint', '""', '--dynamic-addl-distro', 's3://my-s3-bucket/flytesnacks/development/U67AW4X3WQLPZXY2HBRFM7GDWY======/script_mode.tar.gz', '--dynamic-dest-dir', '/root', '--resolver', 'flytekit.core.python_auto_container.default_task_resolver', '--', 'task-module', 'flyte3', 'task-name', 'hello_spark']' returned non-zero exit status 1.
```
Sorry, actually you can't use fast-register if a Spark task is in the workflow, because the Spark workers won't download the code. Try `pyflyte register …` instead.
<@USU6W5ATA> why is that? should it not download the code?
When I use fast-register, only the driver downloaded the code.
Hello, is it possible to use the Cloud Events feature with the bundled Flyte sandbox? I am able to run the cluster and I can see that it connects to Kafka, however no events are being produced when workflow executions happen. Thanks in advance! I can share some details. I've added the following config section to the flyte-sandbox pod:
```
cloudEvents:
  enable: true
  type: kafka
  kafka:
    brokers: flyte-sandbox-kafka-cluster:9093
    version: "3.3.1"
  eventsPublisher:
    topicName: "flyte_event"
    eventTypes:
      - all
```
The service flyte-sandbox-kafka-cluster:9093 is available, I checked it with telnet. Kafka also works fine:
```
kafka-get-offsets.sh --bootstrap-server=flyte-sandbox-kafka-cluster:9093 --topic=flyte_event
flyte_event:0:0
```
This snippet is from the Kafka pod itself. If I change cloudEvents.kafka.brokers to something unavailable, then Flyte doesn't start and reports an error, so I assume that with the config above it connects successfully. How can I troubleshoot this problem? Any assistance will be appreciated!
Flyte will send events to Kafka if the broker config is set correctly. You can check the logs in the flyte-binary pod; there might be some error messages.
No, it doesn't and that's the problem. I cannot find any messages related to cloud events in flyte-binary pod logs.
Is there any container image available to run the Flyte single binary on Kubernetes? <https://github.com/flyteorg/flyte/blob/master/flyte-single-binary-local.yaml>
<https://github.com/flyteorg/flyte/pkgs/container/flyte-binary-release>
Hello, how do I configure S3 and Postgres from environment variables in a Kubernetes cluster?
Are you trying to deploy Flyte on EKS?
Yes Samhita, I am trying to run Flyte in single binary mode on Kubernetes with Postgres and AWS S3 config. Is there any sample config available to test it? Is there a way to run it on an EC2 instance instead of a container?
Have you looked at <https://github.com/flyteorg/flyte/blob/master/charts/flyte-binary/eks-starter.yaml> file?
<@U057SAH5KA7> there's also a community-maintained guide for the process: <https://github.com/davidmirror-ops/flyte-the-hard-way/>
<@U04H6UUE78B> - was following the same. <@U01J90KBSU9> - I have tried the link you shared; I see no error and the container is crashing.
Are you still seeing issues, Abhinay?
Yes <@U01J90KBSU9>. We are unable to bring up the cluster yet. We are seeing this issue now.
```
{"json":{},"level":"warning","msg":"stow configuration section missing, defaulting to legacy s3/minio connection config","ts":"2023-05-17T10:14:52Z"}
{"json":{},"level":"warning","msg":"stow configuration section missing, defaulting to legacy s3/minio connection config","ts":"2023-05-17T10:14:55Z"}
{"json":{},"level":"warning","msg":"Failed to create cluster resources for namespace [flytesnacks-development] with err: Failed to read config template dir [flytesnacks-development] for namespace [] with err: open : no such file or directory","ts":"2023-05-17T10:14:55Z"}
{"json":{},"level":"warning","msg":"Failed to create cluster resources for namespace [flytesnacks-staging] with err: Failed to read config template dir [flytesnacks-staging] for namespace [] with err: open : no such file or directory","ts":"2023-05-17T10:14:55Z"}
{"json":{},"level":"warning","msg":"Failed to create cluster resources for namespace [flytesnacks-production] with err: Failed to read config template dir [flytesnacks-production] for namespace [] with err: open : no such file or directory","ts":"2023-05-17T10:14:55Z"}
{"json":{},"level":"warning","msg":"Failed cluster resource creation loop with: Failed to read config template dir [flytesnacks-development] for namespace [] with err: open : no such file or directory, Failed to read config template dir [flytesnacks-staging] for namespace [] with err: open : no such file or directory, Failed to read config template dir [flytesnacks-production] for namespace [] with err: open : no such file or directory","ts":"2023-05-17T10:14:55Z"}
{"json":{},"level":"error","msg":"Failed to initialize certificates for Secrets Webhook. client rate limiter Wait returned an error: context canceled","ts":"2023-05-17T10:14:57Z"}
{"json":{},"level":"panic","msg":"Failed to start Propeller, err: failed to create FlyteWorkflow CRD: customresourcedefinitions.apiextensions.k8s.io is forbidden: User \"system:serviceaccount:<workspace_name>:flyte\" cannot create resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope","ts":"2023-05-17T10:14:57Z"}
```
Are you following the guide that <@U04H6UUE78B> shared?
Yeah, Helm is working, but the single binary seems not to be working. Can you share environment variables or a config file with dummy values to connect to an AWS S3 bucket and an AWS RDS Postgres instance?
<https://github.com/davidmirror-ops/flyte-the-hard-way/blob/main/docs/05-deploy-with-helm.md> is working for you or not working for you?
We tried this, and it worked. But we want to simplify this even further with env variable injection. Also, what is the significance of
```
annotations:
  eks.amazonaws.com/role-arn: "arn:aws:iam::<aws-account-id>:role/flyte-system-role"
```
It's for the flyte components. <https://github.com/davidmirror-ops/flyte-the-hard-way/blob/7a1c5be3522272f0f0c83fda258f33b5a7f2f9b0/docs/03-roles-service-accounts.md>
> We tried this, and it worked. But we want to simplify this even further with env variable injection.
<@U04H6UUE78B>, do you know how to do this?
Is there a way to configure Flyte without giving a role ARN, by giving access to the EKS host and port?
> Is there a way to configure Flyte without giving a role ARN, by giving access to the EKS host and port?
<@U057SAH5KA7> IAM Roles for Service Accounts is the recommended approach, and as far as I can tell the chart itself is designed to accept a Service Account annotated with an IAM role. What's your use case here?
We are running a lightweight data processing pipeline. We already have EKS with decent capacity. So we are checking if there is a way to skip the ARN, as we already provide the Kubernetes config.
Typically Flyte is deployed on a dedicated EKS cluster, and an IAM role can be used by multiple EKS clusters
Okay. In a single EKS cluster setup, is that optional?
Absolutely! You mean, single worker node?
No, I mean multiple nodes in a single EKS cluster.
yes, it's totally fine. The example guide uses two compute (worker) nodes in a cluster
Can you share a holy bible of Flyte which I can follow for config management? Also, is there a way to inject env variables for AWS S3 and Postgres like the way we do for the Kubernetes config?
> Can you share a holy bible of Flyte which I can follow for config management?
Sure, but in this context, what do you mean by `configuration management`?
AWS S3 bucket specific access, and RDS configuration etc.
> Also, is there a way to inject env variables for AWS S3 and Postgres like the way we do for the Kubernetes config?
All the env vars that I see accepted by the chart are in `values` or under `extraEnvVars` (again in `values`). There is a way to inject env vars, but at the task level (using Pod Templates), not that I know of for the backend. Docs for PodTemplate: <https://docs.flyte.org/en/latest/deployment/configuration/general.html#using-default-k8s-podtemplates>
Thank you for the PodTemplate link <@U04H6UUE78B>. I am looking to run Flyte in a single binary setup, with Postgres and S3 credentials injected via env variables. Is there a way to do this?
Oh, credentials. You could put them in a K8s secret and mount the secret; injecting credentials via env vars is not recommended in general. See here how a Flyte user uses the ExternalSecrets operator to store and inject credentials: <https://github.com/alexifm/flyte-eks-deployment>
Okay, thank you <@U04H6UUE78B>.
Hello! I was wondering if it's possible to modify workflow parameters outside of a task, like this:
```
@task
def print_stuff(input: str):
    print(input)

@workflow
def wf1(test: str = "test"):
    print_stuff(input=f"hello {test}")
```
Since `test` is a Promise in that context I am getting weird prints. Is there a way to extract the string value?
Only in a @dynamic workflow
Thanks :pray: Looks like dynamic workflows are the way to go for a lot of use cases
So workflow code looks like Python, but it's really more of a DSL. When the workflow is compiled, all the tasks are overloaded to return Promise objects instead. So if you run that example, at compile time I think what you would get is a statically bound string of "hello Promise<…>". Dynamic tasks are workflows that are not compiled until run time. Basically, if you say to yourself "I don't know what the structure of the workflow will be until I know the inputs," then you definitely want a dynamic task. A more concrete example:
```
@workflow
def wf(a: int):
    for i in range(a):
        ...  # this fails

@dynamic
def dwf(a: int):
    for i in range(a):
        ...  # this succeeds

# however, tasks called in a dynamic task still produce promises
@dynamic
def dwf2():
    a = task_that_produces_an_int()
    for i in range(a):
        ...  # this still fails
```
In your case, however, can you not pass `test` as a separate input to the task?
Thanks for making this clear! For my example I wanted to modify the string before putting it into an input of type FlyteFile. I think in this case I cannot do it inside the task.
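(A minimal sketch of the suggestion above, with a hypothetical bucket path: the string manipulation moves into a task, where `test` is a plain `str` rather than a Promise.)
```
from flytekit import task, workflow
from flytekit.types.file import FlyteFile


@task
def build_file(test: str) -> FlyteFile:
    # Inside a task, `test` is a materialized str, so normal string
    # formatting works. The s3 path here is a placeholder.
    return FlyteFile(f"s3://my-s3-bucket/inputs/hello_{test}.csv")


@task
def consume(f: FlyteFile):
    print(f.path)


@workflow
def wf1(test: str = "test"):
    consume(f=build_file(test=test))
```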
Hiya, how can I configure the Domain settings? Specifically I’d like to set `Raw data output config` for the whole domain, and have that be the default for tasks in that domain.
You cannot… you can only do it at the project level, or the project & domain level (or the project & domain & workflow level), but you can't do only the domain level of specificity.
Ah ok… Given the title `Domain settings` and its position on the page, I took these as meaning they were the settings for the whole domain.
yeah sorry will this be okay for now? i know it's not ideal. feel free to put in a request, but not sure how it'll get prioritized
Should be ok for now… I’ll try and find a workaround, otherwise will raise a request, thanks for the help!
Hey! I am trying to create a workflow that uses a map_task returning FlyteDirectory, with min_success_ratio set to 0.25 for example. As a result the returned value will be a List[FlyteDirectory] with the possibility of a None or "(empty)" value for failed instances, which causes the transformer to fail. Wondering if anyone can help me handle this specific case, assuming the next step in the wf will expect a list of Flyte directories. Thanks!
The FlyteDirectory transformer is failing? Can you change the list input type from `List[FlyteDirectory]` -> `List[Optional[FlyteDirectory]]`? Does that work?
Yes, it worked! Thanks, didn't know Optional would work. Thank you :slightly_smiling_face:
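For anyone who lands here later, a minimal sketch of the pattern under the same assumptions (hypothetical task bodies; the downstream task declares `List[Optional[FlyteDirectory]]` and drops the `None` entries from failed map instances):
```
import os
import tempfile
from typing import List, Optional

from flytekit import map_task, task, workflow
from flytekit.types.directory import FlyteDirectory


@task
def make_dir(i: int) -> FlyteDirectory:
    # Placeholder body; in the real workflow some instances may fail
    d = tempfile.mkdtemp()
    with open(os.path.join(d, f"{i}.txt"), "w") as f:
        f.write(str(i))
    return FlyteDirectory(d)


@task
def gather(dirs: List[Optional[FlyteDirectory]]) -> int:
    # Failed map instances show up as None; drop them before use
    return len([d for d in dirs if d is not None])


@workflow
def wf(xs: List[int]) -> int:
    dirs = map_task(make_dir, min_success_ratio=0.25)(i=xs)
    return gather(dirs=dirs)
```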
Hello, can anyone tell me how to import datasets into the demo container?
Hello! You should be able to navigate to the object store at `<http://localhost:30080/minio/browser>`, which has a nice interface that allows you to upload files/objects
Thank you, but how do I import that into Flyte? What should the path look like?
What do you mean by import? Flyte can access anything from an object store or over HTTPS, etc.
I actually wrote something about this, see the get_dir task here: <https://gist.github.com/pryce-turner/0a67f86febdc812c9a2a9e739c22eeca> You use the s3 path (<s3://my-s3-bucket/blah>) when creating a FlyteFile/FlyteDirectory
I have a CSV file in an S3 bucket. How do I read that CSV inside a task and return a pandas DataFrame?
```
@task
def t1(a: FlyteFile) -> pd.DataFrame:
    # flyte will download the csv file from s3, and read it
    return pd.read_csv(a)
```
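For completeness, calling that task from a workflow might look like this; it builds on the `t1` task above, assumes `import pandas as pd` and the flytekit imports, and the bucket path is just a placeholder:
```
from flytekit import workflow
from flytekit.types.file import FlyteFile


@workflow
def wf() -> pd.DataFrame:
    # The s3 URI is a placeholder; Flyte downloads it inside the task
    return t1(a=FlyteFile("s3://my-s3-bucket/path/to/data.csv"))
```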
I’m seeing this line in error message of a workflow in the UI ```[4/4] currentAttempt done. Last Error: USER::[1/1] currentAttempt done. Last Error: USER::Traceback ``` Where are these two `currentAttempt` set?
propeller. <https://github.com/flyteorg/flytepropeller/blob/9a4ea000af6bb7b959daa00f26abea7c2e3262e7/pkg/controller/nodes/executor.go#L448>
Thanks! any documentation on how to set the maxAttempts (or <https://github.com/flyteorg/flytepropeller/blob/9a4ea000af6bb7b959daa00f26abea7c2e3262e7/pkg/controller/nodes/executor.go#L450|MinAttempts>)? Default is 4, I guess?
<https://github.com/flyteorg/flyte/blob/d391691c6db314da7298520e4fc83b2f5fe01eb9/kustomize/overlays/eks/flyte/config/propeller/core.yaml#L6> IIRC, minAttempts is set in flytekit: `@task(retries=1, ...)`
interesting, so a task could retry more than `minAttempts` up until `max-workflow-retries`?
Sorry, another question: what is a Flyte way of running a Flyte task inside a non-Flyte function while avoiding the Promise-handling issue, given that the Flyte task returns a Promise the non-Flyte function doesn't know how to handle? Example:
```
@dynamic
def flyte_task():
    return something

def non_flyte_func():
    add_one = 1 + flyte_task()
    return add_one
```
If run remotely as is, it errors out with something like `Unsupported operand type for +: int and Promise`. In my use case, it's not easy to convert `non_flyte_func` to a Flyte task.
A fully eager mode is being worked on right now that will allow you to avoid promises altogether. But until then there are two options that I know of:
1. Use dynamic tasks to work directly with inputs, and then use tasks inside to combine the promises that come from tasks. This sounds like it isn't an option for you.
2. Use Flyte remote to call registered workflows and wait to inspect outputs: <https://docs.flyte.org/projects/flytekit/en/latest/remote.html>
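For option 2, a rough sketch of what that could look like; the endpoint, project, and workflow names are placeholders:
```
from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

# Placeholder endpoint and identifiers
remote = FlyteRemote(config=Config.for_endpoint("flyte.example.org"))
wf = remote.fetch_workflow(
    project="flytesnacks", domain="development", name="my.workflows.wf", version="v1"
)
execution = remote.execute(wf, inputs={"a": 1}, wait=True)
print(execution.outputs)  # real Python values, not Promises
```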
Thanks! I actually prefer option 1. I’ll try to refactor the code a little bit
Good luck! If you run into issues feel free to post more code and I’ll try to assist.
Thanks! Option 1 worked! I basically wrapped the addition in another task, like:
```
@dynamic
def flyte_task():
    return something

@task
def add_one(x: int) -> int:
    return x + 1

def non_flyte_func():
    result = add_one(x=flyte_task())
    return result
```
Hm… it seems like the PodTemplate shouldn’t use the same ServiceAccount as Propeller itself… <https://github.com/flyteorg/flyte/blob/95a083fc31d01951983a2fe223646a7b6a3b6a01/charts/flyte-core/templates/propeller/manager.yaml#L52> (User task Pods should use a different ServiceAccount+IAM Role than Propeller does.) I guess the only way to control that is via the cluster resource manager “templates”? That, or separately creating an additional PodTemplate in every namespace? Is there any documentation about how the cluster_resources and templates work? Like, how do I use it to change something which doesn’t have a predictable name (specifically, the user task Pods)?
<@U057E7YS7PG> firstly welcome to the community- having a hard time following. Cc <@U029U35LRDJ> Hey Shannon let me DM you too
<@U057E7YS7PG> can you expand on this? The `PodTemplate` that is linked is for <https://docs.flyte.org/en/latest/deployment/configuration/performance.html#automatic-scale-out|propeller manager>. Basically, in large-scale scenarios it may be beneficial to have multiple propeller instances and shard the workflows between them. So the service account here is for each of the managed propeller instances. Are you looking for the <https://docs.flyte.org/en/latest/deployment/configuration/general.html#using-default-k8s-podtemplates|default PodTemplate work>? This allows Flyte to use a pre-defined `PodTemplate` as the base for all k8s tasks. We have extended this work quite a bit to <https://github.com/flyteorg/flyte/issues/3123|allow PodTemplates per task> - (<https://github.com/flyteorg/flyte/pull/3391|official docs are currently incomplete>).
Ah, I see now that it’s different from Helm’s `configmap.core.manager.pod-template-name` … that one is under `configmap.k8s.default-pod-template-name` I guess? Gotcha, so user tasks do use “default” ServiceAccount. My mistake, I wasn’t seeing how the Helm values map to the stuff in the docs. I assume Flyte automatically creates/customizes new Service Accounts in namespaces it dynamically creates, since I see the “default” one being customized in the templates? What if I want it to automatically create/use a ServiceAccount with a different name? Is that possible?
Happy Friday, flyte community! Is there a way to use `with_override` or something else to override only task container image without creating a Pod from scratch (I’d prefer to reuse existing pod config but with new image)?
yes, we’re going to support `override(container_image=…)` <https://flyte-org.slack.com/archives/CP2HDHKE1/p1683604549675059>
Sweet, thanks! Looking forward to it!
<@U04UNGML8NB> we should on our end get rid of that dependency conflict and try to stay on the most recent version of flyte, there’s always cool stuff getting introduced in new versions
Agreed!
• <https://github.com/flyteorg/flytekit/releases>
• <https://github.com/flyteorg/flyte/releases/tag/v1.6.0>
Yup, we just cut a release yesterday. Feel free to give it a shot. PR for image overriding: <https://github.com/flyteorg/flytekit/pull/1652>
<@U04UNGML8NB> can you describe your use-case a bit more? it’s already possible to run a workflow with tasks that have different images
Thanks. Sure. I’d like to override the container image on the fly; currently it can only be done when the task is defined, if I understand correctly. It’d be great if using a container image digest were supported too.
Got it, okay. And yeah, supporting digests is something that’s been on the radar, but I’m not sure there’s a ticket for it. Would you mind adding one? And maybe think about how you might want to use it?
Sure! How do I file a ticket?
[flyte-core]
:star: Create a new Flyte Core Feature issue: <https://github.com/flyteorg/flyte/issues/new?assignees=&labels=enhancement%2Cuntriaged&template=feature_request.yaml&title=%5BCore+feature%5D+>
The doc advice about using `arn:aws:iam::aws:policy/AmazonS3FullAccess` is not really a secure approach… I am trying to figure out which Buckets and/or prefixes each component actually uses, and whether they need read or write. It seems like the `metadata-prefix` and `metadataStoragePrefix` stuff needs to be read/write from the control plane? What about the user tasks? They need read/write too… but can that be restricted to only certain paths that won’t interfere with the control plane? Is there any other detail about read/write & prefix needs at the component level? I see:
> None of the Flyte control plane components would access the raw data.
So that helps. I assume that’s referring to the `rawoutput-prefix` setting.
correct. `metadata-prefix` -> control plane; `rawoutput-prefix` -> data plane
Yeah, how about read/write on various prefixes, and preventing user tasks from disrupting control plane?
You can do that. Please suggest updates
I would if I knew how it worked :slightly_smiling_face: I’ll figure it out eventually
Hi everyone, is torch a required dependency of flytekit? Or is it possible to install it without the `extras`?
```
  File "/home/ttheisen/repos/chariot/py/apps/v2-model-recommender/internal/workflow/secrets.py", line 1, in <module>
    from flytekit import Secret, current_context
  File "/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages/flytekit/__init__.py", line 187, in <module>
    from flytekit.extras import pytorch
  File "/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages/flytekit/extras/pytorch/__init__.py", line 18, in <module>
    import torch
  File "/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages/torch/__init__.py", line 228, in <module>
    _load_global_deps()
  File "/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages/torch/__init__.py", line 189, in _load_global_deps
    _preload_cuda_deps(lib_folder, lib_name)
  File "/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages/torch/__init__.py", line 154, in _preload_cuda_deps
    raise ValueError(f"{lib_name} not found in the system path {sys.path}")
ValueError: libcublas.so.*[0-9] not found in the system path ['/home/ttheisen/repos/chariot/py/apps/v2-model-recommender', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/ttheisen/.cache/pypoetry/virtualenvs/v2-model-recommender-KgGCWpXS-py3.8/lib/python3.8/site-packages']
make: *** [Makefile:34: run] Error 1
```
I see this: <https://github.com/flyteorg/flytekit/blob/master/flytekit/extras/pytorch/__init__.py> But a ValueError is being raised (not ImportError or OSError).
You don’t need torch for flytekit. Which version of flytekit are you using? Could you share the entire logs?
> But a ValueError is being raised (not ImportError or OSError)
This is the key here. We're being too optimistic in <https://github.com/flyteorg/flytekit/blob/master/flytekit/extras/pytorch/__init__.py#L17-L22>. For some reason in <@U023KFHDRKJ>’s case the import statement raises a `ValueError`. We should catch that just to be safe.
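A sketch of the proposed widening, keeping the same guarded-import shape as the linked lines but adding `ValueError` to the caught exceptions (the flag name here is illustrative):
```
try:
    # torch can fail to import for environment reasons (e.g. missing CUDA
    # libs), not only because the package is absent
    import torch  # noqa: F401

    _torch_installed = True
except (ImportError, OSError, ValueError):
    _torch_installed = False
```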
After upgrading to 1.6.0 I get past the issue; I was previously running 1.5.0, for SA.
Hi team, I’m using the GetExecutionData API to fetch workflow inputs. Is there a util method in flytekit that I can use to convert a `LiteralMap` back to Python values?
You could use the TypeEngine: <https://github.com/flyteorg/flytekit/blob/18b212be913cb768a6fcd3a6de8754ce727650ac/tests/flytekit/unit/core/test_type_engine.py#L370>
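Roughly something like the following round-trip; with GetExecutionData you would already have the `LiteralMap` in hand, and the expected Python types are assumed known to the caller:
```
from flytekit.core.context_manager import FlyteContextManager
from flytekit.core.type_engine import TypeEngine
from flytekit.models.literals import LiteralMap

ctx = FlyteContextManager.current_context()

# Build a LiteralMap just for the demo; GetExecutionData returns one directly
lit = TypeEngine.to_literal(ctx, 3, int, TypeEngine.to_literal_type(int))
lm = LiteralMap(literals={"x": lit})

# Convert the LiteralMap back to Python values
kwargs = TypeEngine.literal_map_to_kwargs(ctx, lm, {"x": int})
print(kwargs)  # {'x': 3}
```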
Hi, I’m getting the following error (a failed task with a dataclass/dataclass_json as an input and a tuple of numpy arrays as an output) but I can’t seem to make sense of it. It looks like something to do with the task input, but any help with what might be causing this error would be appreciated:
```
Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/flytekit/exceptions/scopes.py", line 165, in system_entry_point
    return wrapped(*args, **kwargs)
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/base_task.py", line 518, in dispatch_execute
    native_inputs = TypeEngine.literal_map_to_kwargs(exec_ctx, input_literal_map, self.python_interface.inputs)
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 868, in literal_map_to_kwargs
    return {k: TypeEngine.to_python_value(ctx, lm.literals[k], python_types[k]) for k, v in lm.literals.items()}
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 868, in <dictcomp>
    return {k: TypeEngine.to_python_value(ctx, lm.literals[k], python_types[k]) for k, v in lm.literals.items()}
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 832, in to_python_value
    return transformer.to_python_value(ctx, lv, expected_python_type)
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 606, in to_python_value
    return self._fix_dataclass_int(expected_python_type, self._deserialize_flyte_type(dc, expected_python_type))
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 589, in _fix_dataclass_int
    dc.__setattr__(f.name, self._fix_val_int(f.type, val))
  File "/opt/venv/lib/python3.10/site-packages/flytekit/core/type_engine.py", line 568, in _fix_val_int
    ktype, vtype = DictTransformer.get_dict_types(t)
Message: not enough values to unpack (expected 2, got 0)
```
Would you mind sharing your code snippet?
Sure - this is the failing task:
```
@task
def load_data(ds: SidetrekDataset) -> typing.Tuple[np.ndarray, np.ndarray]:
    # Load the dataset
    csv_data = load_dataset(ds, data_type="csv", compression="zip", streaming=False)
    df = pd.read_csv(csv_data)
    # df = pd.read_csv((pathlib.Path(__file__).parent / filename).resolve())

    # Define X and Y
    X = df.drop(["fraud"], axis=1)
    y = df["fraud"]

    # Standardize the data
    scaler = StandardScaler()
    X = scaler.fit_transform(X)
    return (X, y)
```
where `SidetrekDataset` is:
```
@dataclass_json
@dataclass
class SidetrekDataset(object):
    io: str
    source: str
    options: Dict
```
Can you specify `options` dict key and value types? Something like `Dict[str, int]`
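That is, something along these lines; the value type here is only an assumption, use whatever the options actually hold:
```
from dataclasses import dataclass
from typing import Dict

from dataclasses_json import dataclass_json


@dataclass_json
@dataclass
class SidetrekDataset(object):
    io: str
    source: str
    options: Dict[str, str]  # assumed value type
```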
OK, will try - thanks. Is `Any` a valid type I can use? <@U01J90KBSU9> For example, `Dict[str, Any]`, or will that not work? This dictionary should be able to receive many different types.
It has to work, but it isn't :cry: <@U0265RTUJ5B> When I try to trigger the following workflow on the demo cluster:
```
@task
def t1(x: Dict[str, Any]) -> Dict[str, Any]:
    return x

@workflow
def wf(x: Dict[str, Any] = {"a": 1, "b": "hello"}):
    t1(x=x)
```
I see this error:
```
pyflyte run --remote test.py wf
{"asctime": "2023-05-17 09:47:13,339", "name": "flytekit", "levelname": "WARNING", "message": "Unsupported Type typing.Any found, Flyte will default to use PickleFile as the transport. Pickle can only be used to send objects between the exact same version of Python, and we strongly recommend to use python type that flyte support."}
{"asctime": "2023-05-17 09:47:13,339", "name": "flytekit", "levelname": "WARNING", "message": "Unsupported Type typing.Any found, Flyte will default to use PickleFile as the transport. Pickle can only be used to send objects between the exact same version of Python, and we strongly recommend to use python type that flyte support."}
{"asctime": "2023-05-17 09:47:13,339", "name": "flytekit", "levelname": "WARNING", "message": "Unsupported Type typing.Any found, Flyte will default to use PickleFile as the transport. Pickle can only be used to send objects between the exact same version of Python, and we strongly recommend to use python type that flyte support."}
Failed with Exception: Reason: USER:AssertionError
Underlying Exception: The provided token has expired.
Failed to put data from /var/folders/6f/xcgm46ds59j7g__gfxmkgdf80000gn/T/flytevycut24g/control_plane_metadata/local_flytekit/e77e673b91c7b633f9931a10aeea228e to s3://my-s3-bucket/data/f64272bf89185219386c7978c1df76d9/e77e673b91c7b633f9931a10aeea228e (recursive=False).
Original exception: The provided token has expired.
```
Flytekit is trying to upload the pickle file in the local environment.
<@U01J90KBSU9>, this is a separate issue, caused by how we handle default values for workflow arguments. What happens if you modify the invocation of `wf` like `pyflyte run --remote test.py wf --x '{"a":1,"b":"hello"}'`?
I see `Failed to convert param <Option x>, {'a': 1, 'b': 'hello'} to typing.Dict[str, flytekit.types.pickle.pickle.FlytePickle.__class_getitem__.<locals>._SpecificFormatClass]` error.
<@U01J90KBSU9>, <https://github.com/flyteorg/flytekit/pull/1646> fixes this. Before we have a release, can you try running this with flytekit master?
Works for me!
Any idea as to when this will be released?
Is there a way to programmatically create a project? (altern. to `flytectl create project`)
yup check out the flytekit.remote
Can I just create the project when registering a workflow on Flyte?
? No, you have to use <https://github.com/flyteorg/flytekit/blob/993201f5b1fc4398943aef96413fc5850c0bee68/flytekit/clients/friendly.py#L831>
oh, so it’s `flytekit.client` not `flytekit.remote`
Yeah it's `remote.client`; here's an example: <https://github.com/flyteorg/airflow-provider-flyte/blob/bb5d94e7492869f0c0795cdf5a9a87335614b355/flyte_provider/hooks/flyte.py#LL278C18-L278C18> We don't have this available in the remote, but it's easy enough to add!
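Adapted from that example, a sketch with placeholder endpoint and project values (assuming the `Project` model's `id`/`name`/`description` fields):
```
from flytekit.configuration import Config
from flytekit.models.project import Project
from flytekit.remote import FlyteRemote

remote = FlyteRemote(config=Config.for_endpoint("flyte.example.org"))  # placeholder host
remote.client.register_project(
    Project(id="my-project", name="my-project", description="created programmatically")
)
```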
got it, thank you!
Is there a way, in a task, to make a deck appear in the Flyte console before the task has finished? And potentially update it several times as training progresses?
Not today
That would be super cool!
<@U01DYLVUNJE> in your eager mode demo you listed the tasks that were executed as subtasks. Did this list appear after the eager task was done or have you figured out a way to continuously update the deck?
After. The reason is that it's assembled at the end.
yeah, would be awesome to render flyte decks during the execution of a task… another RFC? :smiley: