Falcon RefinedWeb 7b and 1b (=1.3 billion) #70

maddes8cht started this conversation in General
On the original Falcon release (TII Falcon on Hugging Face) there are two RefinedWeb model variants, 7b and 1.3b. Okay, why should anyone use the 7b RW model with its smaller training set when there is the "good" model? But there is also this 1.3b model - can we get this running in falcon-main? It should be superfast, even at really large context, and it will run on really small devices. Maybe there is some use case in it.
If one has easily understandable text, nothing complex, just standard prose, it may even be capable of summarizing it.
Let's imagine: really big chunks of (simple) text being crunched and summarized really fast by this model.
Will this work?

Replies: 1 comment

- I've not tested it yet, though it might work. ggllm was written to support any sort of falcon-type architecture.
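The summarization idea is easy to smoke-test outside of falcon-main first. Below is a minimal, untested sketch using Hugging Face transformers (not ggllm), assuming the public `tiiuae/falcon-rw-1b` checkpoint and a transformers version with native Falcon support (older versions needed `trust_remote_code=True`). Since falcon-rw is a plain causal LM, "summarization" here is just prompted greedy generation; the prompt format and example text are made up for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-rw-1b"  # public Hub id of the 1.3b RefinedWeb model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = (
    "The town library opened a new reading room last week. It seats "
    "forty visitors, is open every afternoon, and entry is free."
)
# The model is not instruction-tuned, so we rely on a plain
# text-then-summary prompt and hope it continues in kind.
prompt = f"Text: {text}\n\nOne-sentence summary:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,                    # keep the summary short
    do_sample=False,                      # greedy, for reproducibility
    pad_token_id=tokenizer.eos_token_id,  # Falcon defines no pad token
)
# Decode only the newly generated tokens, not the echoed prompt.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

If this produces usable summaries on simple text, converting the same checkpoint for falcon-main would then answer the speed and small-device part of the question.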