LinStevenn's Collections
- StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization (arXiv:2311.14495)
- Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model (arXiv:2401.09417)
- SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation (arXiv:2401.13560)
- Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces (arXiv:2402.00789)
- Convolutional State Space Models for Long-Range Spatiotemporal Modeling (arXiv:2310.19694)
- Vivim: a Video Vision Mamba for Medical Video Object Segmentation (arXiv:2401.14168)
- 2-D SSM: A General Spatial Layer for Visual Transformers (arXiv:2306.06635)
- BlackMamba: Mixture of Experts for State-Space Models (arXiv:2402.01771)
- Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks (arXiv:2402.04248)
- Graph Mamba: Towards Learning on Graphs with State Space Models (arXiv:2402.08678)
- DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models (arXiv:2403.00818)
- Diffusion Models Without Attention (arXiv:2311.18257)
- ZigMa: Zigzag Mamba Diffusion Model (arXiv:2403.13802)
- MambaIR: A Simple Baseline for Image Restoration with State-Space Model (arXiv:2402.15648)
- Scalable Diffusion Models with State Space Backbone (arXiv:2402.05608)
- LocalMamba: Visual State Space Model with Windowed Selective Scan (arXiv:2403.09338)
- VMamba: Visual State Space Model (arXiv:2401.10166)
- VideoMamba: State Space Model for Efficient Video Understanding (arXiv:2403.06977)
- MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection (arXiv:2403.19888)
- VL-Mamba: Exploring State Space Models for Multimodal Learning (arXiv:2403.13600)