Currently, the only available backend for OS X leverages NSSpeechSynthesizer directly. While this works, it has some disadvantages, including no support at all for Braille displays. We should add a specific VoiceOver backend in addition to OSXSay.
Rather than trying to control VoiceOver directly using AppleScript, we could leverage NSAccessibility notifications, which are in turn handled by VoiceOver. In particular, to make VoiceOver output a string we could use NSAccessibilityAnnouncementRequestedNotification:
https://developer.apple.com/library/mac/documentation/AppKit/Reference/NSAccessibility_Protocol_Reference/index.html#//apple_ref/doc/uid/TP40014985
This approach could easily be ported to iOS/tvOS, creating a VoiceOver backend for iOS that wouldn't even violate the App Store rules. The main disadvantage is that Kodi Screen Reader would not be able to adjust any setting, including speech volume and rate, because those would be managed by VoiceOver.
The main problem is that posting these notifications requires access to functions defined by AppKit, notably NSAccessibilityPostNotificationWithUserInfo(). Is there a reliable way to call them from Python?
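One possible route is the PyObjC bridge, which exposes AppKit's C functions and constants directly to Python. The sketch below assumes PyObjC is available in Kodi's Python environment (an assumption, not something the project currently guarantees); the priority values are the documented NSAccessibilityPriorityLevel constants.

```python
import sys

# Announcement priority levels documented by AppKit
# (NSAccessibilityPriorityLow/Medium/High).
PRIORITY_LOW, PRIORITY_MEDIUM, PRIORITY_HIGH = 10, 50, 90


def announce(message, priority=PRIORITY_HIGH):
    """Ask VoiceOver to speak `message` by posting
    NSAccessibilityAnnouncementRequestedNotification on the app object.

    Sketch only: assumes the PyObjC bridge is installed and the process
    has a running NSApplication for VoiceOver to observe.
    """
    if sys.platform != "darwin":
        raise RuntimeError("VoiceOver announcements require macOS/AppKit")
    # PyObjC exposes the AppKit function and string constants directly.
    from AppKit import (
        NSApp,
        NSAccessibilityPostNotificationWithUserInfo,
        NSAccessibilityAnnouncementRequestedNotification,
        NSAccessibilityAnnouncementKey,
        NSAccessibilityPriorityKey,
    )
    NSAccessibilityPostNotificationWithUserInfo(
        NSApp,
        NSAccessibilityAnnouncementRequestedNotification,
        {
            NSAccessibilityAnnouncementKey: message,
            NSAccessibilityPriorityKey: priority,
        },
    )
```

Using the imported constants rather than hardcoded "AX…" strings keeps the sketch robust if Apple ever changes the underlying key values; whether VoiceOver actually speaks the announcement also depends on it running and on the app being accessible, which can only be verified on a real macOS session.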
I could work on this in the coming days/weeks, but I need some guidance. Is there a "BackendInterface" that backend classes should conform to? If so, where can I find its documentation? What's the recommended strategy for compiling/testing new backends? I'd love to minimize the risk of losing speech output from Kodi Screen Reader, since I'm blind.
falcon03 changed the title from "Evaluate adding a specific VoiceOver backend" to "Consider adding a specific VoiceOver backend for OS X leveraging NSAccessibility notifications" on Feb 12, 2016.