lambda-technologies-limited committed: Update README.md
- Prompt-enhancement
---

# Model Card for Floral-High-Dynamic-Range

## Model Details

Floral High Dynamic Range (Floral HDR) is a Large Image Generation Model (LIGM) noted for the accuracy with which it generates high-quality, highly detailed scenes. Derived from the Floral AI model, which is used in film generation, it marks a milestone in image synthesis technology.

Created by: Future Technologies Limited

### Model Description

Floral High Dynamic Range is a state-of-the-art Large Image Generation Model that excels at producing images with stunning clarity, precision, and intricate detail. Known for its accuracy in generating hyper-realistic, aesthetically rich images, it sets a new standard in image synthesis: landscapes, objects, and full scenes come out vivid and lifelike.

Originally derived from the Floral AI Model, which has been successfully applied in film generation, Floral HDR integrates advanced techniques for handling complex lighting, wide dynamic ranges, and detailed scene compositions. This makes it well suited to applications where high-resolution imagery and realistic scene generation are critical.

Designed and developed by Future Technologies Limited, Floral HDR represents a significant step forward for AI-driven image generation in creative industries such as digital art, film, and immersive media.

- **Developed by:** Future Technologies Limited (Lambda Go Technologies Limited)
- **Model type:** Large Image Generation Model
- **Language(s) (NLP):** English
- **License:** apache-2.0
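The card does not yet include a usage snippet. Below is a minimal sketch that assumes the weights are published under the author's Hugging Face namespace and load through the standard `diffusers` pipeline API; the repo id, prompt, and output filename are illustrative assumptions, not details confirmed by this card.

```python
import os

# Hypothetical repo id -- assumed from the author's namespace, not confirmed by the card.
MODEL_ID = "lambda-technologies-limited/Floral-High-Dynamic-Range"


def load_pipeline(model_id: str = MODEL_ID):
    """Load the model with Hugging Face diffusers, if the library is available."""
    try:
        import torch
        from diffusers import DiffusionPipeline
    except ImportError:
        return None  # diffusers/torch not installed; nothing to load
    # bf16 matches the training regime stated later in this card (bf16 mixed precision).
    return DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)


# Gated behind an env var: downloading multi-gigabyte weights is not always wanted.
if __name__ == "__main__" and os.environ.get("RUN_FLORAL_DEMO"):
    pipe = load_pipeline()
    if pipe is not None:
        pipe = pipe.to("cuda")
        image = pipe("a high-dynamic-range field of wildflowers at dusk").images[0]
        image.save("floral_hdr_sample.png")
```

If the repository uses a custom pipeline class rather than `DiffusionPipeline`, the loading call would need to change accordingly.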

## Uses

**Film and Animation Studios**
- Intended users: directors, animators, visual effects artists, and film production teams.
- Impact: empowers filmmakers to generate realistic scenes and environments with less reliance on traditional CGI and manual artistry, shortening production timelines and lowering the cost of complex visuals.

**Game Developers**
- Intended users: game designers, developers, and 3D artists.
- Impact: helps create highly detailed game worlds, characters, and assets, letting developers focus on interactive elements while the model handles the visual richness of the environments, enhancing immersion and the overall player experience.

**Virtual Reality (VR) and Augmented Reality (AR) Creators**
- Intended users: VR/AR developers, interactive media creators, and immersive experience designers.
- Impact: quickly generates lifelike virtual environments, making VR and AR applications more realistic and convincing, which matters for everything from training simulations to entertainment.

**Artists and Digital Designers**
- Intended users: digital artists, illustrators, and graphic designers.
- Impact: generates high-quality visual elements, scenes, and concepts, helping artists visualize complex ideas faster and push their creative boundaries.

**Marketing and Advertising Agencies**
- Intended users: creative directors, marketers, advertising professionals, and content creators.
- Impact: produces striking visuals for advertisements, product launches, and promotional materials, helping businesses stand out with high-impact campaign imagery.

**Environmental and Scientific Researchers**
- Intended users: environmental scientists, researchers, and visual data analysts.
- Impact: simulates realistic environments for climate studies, ecosystem modeling, and scientific visualization, giving researchers an accessible way to communicate complex concepts through imagery.

**Content Creators and Social Media Influencers**
- Intended users: influencers, social media managers, and visual content creators.
- Impact: creates engaging, high-quality content with minimal effort, helping users build a more captivating online presence.

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
## Training Details
#### Preprocessing [optional]
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision
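For readers unfamiliar with the bf16 format named above: bfloat16 keeps float32's full 8-bit exponent range but truncates the mantissa to 7 bits. The conversion can be sketched in plain Python (illustrative only; this is not the model's actual training code).

```python
import struct


def f32_to_bf16_bits(x: float) -> int:
    """Round a float32 to the nearest bfloat16, returned as a raw 16-bit pattern."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    lsb = (bits >> 16) & 1
    bits += 0x7FFF + lsb          # round to nearest, ties to even
    return (bits >> 16) & 0xFFFF


def bf16_bits_to_f32(bits16: int) -> float:
    """Widen a raw bfloat16 bit pattern back to float32 (exact, since bf16 is a prefix of f32)."""
    (x,) = struct.unpack("<f", struct.pack("<I", (bits16 & 0xFFFF) << 16))
    return x


# bf16 preserves float32's dynamic range but only ~3 decimal digits of precision,
# which is why mixed-precision training keeps a float32 master copy of the weights.
roundtrip = bf16_bits_to_f32(f32_to_bf16_bits(3.14159))
```

Powers of two and small sums round-trip exactly; arbitrary values lose the low mantissa bits, as `roundtrip` shows.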
### Results
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA A100 GPUs
- **Hours used:** 45k+
- **Cloud Provider:** Future Technologies Limited
- **Compute Region:** Rajasthan, India
- **Carbon Emitted:** 0 (training was powered entirely by solar energy)
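The calculator linked above estimates emissions as GPU power draw times hours times datacenter PUE times grid carbon intensity. A back-of-the-envelope sketch of that arithmetic, using the 45k GPU-hours stated on this card; the 0.4 kW figure assumes an A100 SXM's roughly 400 W TDP, and the PUE and intensity values are illustrative, not measured:

```python
def co2_kg(gpu_kw: float, hours: float, pue: float, kg_co2_per_kwh: float) -> float:
    """Estimated training emissions in kg CO2-eq: energy drawn times grid carbon intensity."""
    return gpu_kw * hours * pue * kg_co2_per_kwh


# ~0.4 kW per A100 (an assumption) over the 45k GPU-hours stated above.
grid  = co2_kg(0.4, 45_000, pue=1.2, kg_co2_per_kwh=0.7)   # illustrative fossil-heavy grid
solar = co2_kg(0.4, 45_000, pue=1.2, kg_co2_per_kwh=0.0)   # card's claim: fully solar-powered
```

With a zero-carbon supply the intensity term zeroes out the estimate, which is how the card's "0" figure is consistent with the calculator's methodology.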
### Model Architecture and Objective